|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Flexible architecture
- + Satisfactory platform support
- + Several Microsoft integration points
|
- + Built for simplicity
- + Policy-based management
- + Cost-effectiveness
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Minimal data protection capabilities
- - No native dedup capabilities
- - No native encryption capabilities
|
- - Single hypervisor support
- - No stretched clustering
- - No native file services
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
DataCore was founded in 1998 and began shipping its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of the offerings, since the core software is the same, and the combined product has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution aimed specifically at file virtualization. This additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
More recently, at the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm and, together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access for any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. DataCore has by far the longest track record in software-defined storage when compared to the other SDS/HCI vendors on the WhatMatrix.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: HyperConverged Appliance (HCA)
Type: Hardware+Software (HCI)
Development Start: 2014
First Product Release: 2016
StarWind Software is a privately held company which started in 2008 as a spin-off from Rocket Division Software, Ltd. (founded in 2003). It initially provided free Software Defined Storage (SDS) offerings to early adopters in 2009. In 2011 the company released its product Native SAN (later rebranded to Virtual SAN). In 2015 StarWind executed a successful 'pivot shift' from a software-only company to a hardware vendor and brought Hyper-Convergence from the enterprise level to SMB and ROBO. The new HyperConverged Appliance (HCA) allowed the company to tap into the 'long tail' of the hyper-convergence market thanks to its simplicity and cost-efficiency. In 2016 StarWind released two more hardware products: Backup Appliance and Storage Appliance.
In June 2019 the company had a StarWind HCA install base of approximately 1,250 customers worldwide. In June 2019 there were more than 250 employees working for StarWind.
|
Name: Hyperconvergence (HC3)
Type: Hardware+Software (HCI)
Development Start: 2011
First Product Release: 2012
Scale Computing was founded in 2007 and began shipping its first SAN/NAS scale-out storage product in 2009. In mid-2011 development started on the Hyperconvergence (HC3) platform, which combines the three foundation layers (compute, storage and virtualization) into a single hardware appliance. HC3 was built to provide ultra-simple ease of use and was initially targeted at the SMB market. The first HC3 models were released in August 2012.
In January 2019 the company had an install base of more than 3,500 customers worldwide and 130+ employees working for Scale Computing.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
10th Generation software. DataCore currently has the most experience with SDS/HCI technology when comparing SANsymphony to other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
Release Dates:
HCA build 13279: oct 2019
HCA build 13182: aug 2019
HCA build 12767: feb 2019
HCA build 12658: nov 2018
HCA build 12393: aug 2018
HCA build 12166: may 2018
HCA build 12146: apr 2018
HCA build 11818: dec 2017
HCA build 11456: aug 2017
HCA build 11404: jul 2017
HCA build 11156: may 2017
HCA build 10833: apr 2017
HCA build 10799: mar 2017
HCA build 10695: feb 2017
HCA build 10547: jan 2017
HCA build 9996: aug 2016
HCA build 9781: jun 2016
HCA build 9052: may 2016
HCA build 8730: nov 2015
Version 8 Release 10 StarWind software on proven Super Micro and Dell server hardware.
|
GA Release Dates:
HCOS 8.6.5: mar 2020
HCOS 8.5.3: oct 2019
HCOS 8.3.3: jul 2019
HCOS 8.1.3: mar 2019
HCOS 7.4.22: may 2018
HCOS 7.2.24: sep 2017
HCOS 7.1.11: dec 2016
HCOS 6.4.2: apr 2016
HCOS 6.0: feb 2015
HCOS 5.0: oct 2014
ICOS 4.0: aug 2012
ICOS 3.0: may 2012
ICOS 2.0: feb 2010
ICOS 1.0: feb 2009
8th Generation Scale Computing software on proven Lenovo and SuperMicro server hardware.
Scale Computing HC3's maturity has been steadily increasing ever since the first iteration by expanding its feature set with both foundational and advanced capabilities. Due to its primary focus on small and midsized organizations, the feature set does not (yet) incorporate some of the larger enterprise capabilities.
HCOS = HyperCore Operating System
ICOS = Intelligent Clustered Operating System
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
|
Per Node
Each Scale Computing HC3 appliance purchased consists of hardware (server+storage), software (all-inclusive) and 1 year of premium support. Optionally, end-users can also request TOR switches as part of the solution and deployment.
In June 2018 Scale Computing introduced a Managed Service Providers (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
TOR = Top-of-Rack
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed per capacity (in 1 TB steps). Except for the Business edition, which has a fixed price per TB, the more capacity an end-user licenses in each edition, the lower the price per TB.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
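As an illustration of the Business (BZ) capacity rules above, the minimal Python sketch below checks a proposed BZ installation against the published limits; the function and its inputs are hypothetical and not part of any DataCore tooling.

```python
# Illustrative only: validate a proposed Business (BZ) edition installation
# against the published limits (1 TB licensing steps, 10 TB minimum,
# 38 TB maximum, 2 instances per installation). Not a DataCore tool.

def validate_bz_installation(capacity_tb: float, instances: int) -> list[str]:
    issues = []
    if capacity_tb != int(capacity_tb):
        issues.append("Capacity is licensed in whole 1 TB steps.")
    if capacity_tb < 10:
        issues.append("BZ requires a 10 TB minimum per installation.")
    if capacity_tb > 38:
        issues.append("BZ is limited to a total capacity of 38 TB per installation.")
    if instances > 2:
        issues.append("BZ is limited to 2 instances per installation.")
    return issues

print(validate_bz_installation(capacity_tb=40, instances=2))
# ['BZ is limited to a total capacity of 38 TB per installation.']
```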
|
Per Node (all-inclusive)
Normal StarWind Virtual SAN licensing does not apply. Each StarWind HCA node comes equipped with an all-inclusive feature set. This means that a StarWind HCA solution provides unlimited capacity and scalability.
|
Per Node (all-inclusive)
There is no separate software licensing. Each node comes equipped with an all-inclusive feature set. This means that without exception all Scale Computing HC3 software capabilities are available for use.
In June 2018 Scale Computing introduced a Managed Service Providers (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
HC3 Cloud Unity DRaaS requires a monthly subscription that is in part based on Google Cloud Platform (GCP) resource usage (compute, storage, network). The HC3 Cloud Unity DRaaS subscription includes:
- 6 days of Active Mode testing
- Runbook outlining DR procedures
- 1 Runbook failover test and 1 separate Declaration
- Network egress equal to 12.5% of Storage
- ScaleCare Support
In addition, end-users and first-time service providers can purchase a DR Planning Service (one-time fee) for onboarding.
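To make the included egress allowance above concrete, here is a small illustrative Python calculation; the storage figure is hypothetical and this is not Scale Computing pricing tooling.

```python
# Illustrative arithmetic only: the HC3 Cloud Unity DRaaS subscription includes
# network egress equal to 12.5% of the protected storage capacity.

def included_egress_tb(protected_storage_tb: float) -> float:
    return protected_storage_tb * 0.125

print(included_egress_tb(40))  # 5.0 TB of included egress for 40 TB of protected storage
```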
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node (included)
StarWind ProActive Support is included with every HCA node that is acquired.
StarWind ProActive Support monitors the cluster health 24x7x365.
With the initial purchase, StarWind HCA comes with ProActive Support for a period of 3 years. There are options for 5 and 7 years of support. For part replacement within the support contract, both Next Business Day (NBD) and 4-hour options are available.
|
Per Node
Each appliance comes with 1 year ScaleCare Premium Support that consists of:
- 24x7x365 by telephone (US and Europe)
- 2 hour response time for critical issues
- Live chat, email support, and general phone support Mon-Fri 8AM-8PM (US Eastern Time)
- Next Business Day (NBD) delivery of hardware replacement parts
ScaleCare Premium Support also provides remote installation services on the initial deployment of Scale Computing HC3 clusters.
In June 2018 Scale Computing introduced a Managed Service Providers (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony Software-Defined Storage Services are focused on variable deployment models. The range covers classical Storage Virtualization over Converged and Hybrid-Converged to Hyperconverged including a seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Storage
Management
|
Hypervisor
Compute
Storage
Networking (optional)
Data Protection
Management
Automation&Orchestration
Scale Computing is stack-oriented.
With the HC3 platform Scale Computing aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 25, 40, 100 GbE
StarWind hardware appliances include redundant ethernet connectivity using SFP+, SFP28, QSFP+, QSFP14, QSFP28, Base-T. StarWind recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
1, 10 GbE
Scale Computing hardware models include redundant ethernet connectivity in an active/passive setup.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture; however, this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and can be turned on/off, each one needs to be taken into consideration when preparing a detailed design.
|
Medium
StarWind HCA in itself has a straightforward technical architecture. However, StarWind HCA does not encompass many native data protection capabilities and data services. A complete solution design therefore requires the presence of multiple technology platforms.
|
Low
Scale Computing HC3 was developed with simplicity in mind, both from a design and a deployment perspective. The HC3 platform architecture is meant to be applicable to general virtual server infrastructure (VSI) use-cases and seeks to provide important capabilities natively. There are only a few storage building blocks to choose from, and many advanced capabilities like deduplication are always turned on. This minimizes the amount of design choices as well as the number of deployment steps.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
StorageReview (Oct 2019)
StorageReview (Oct 2019)
Title: 'StarWind HyperConverged Appliance Review'
Workloads: Generic profiles, SQL profiles
Benchmark Tools: VDbench
Hardware: HCA L-AF 3.2, 2-node cluster, HCA 13279
|
N/A
No Scale Computing HC3 validated test reports have been published in 2016/2017/2018/2019.
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16TB) and time (30 days); all of these restrictions can be expanded upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
There are 3 ways for end-user organizations to evaluate StarWind:
1. StarWind Free. A free version of the software-only Virtual SAN product can be downloaded from the StarWind website. The free version has full functionality, but the StarWind Management Console works only in monitoring mode without the ability to create or manage StarWind devices. All management is performed via PowerShell and a set of script templates.
The free version is intended to be self-supported or community-supported on public discussion forums.
2. StarWind Trial. The trial version has full functionality and all the management capabilities for StarWind Management Console and is limited to 30 days but can be prolonged if required.
3. Proof-of-Concept. This option is available in two ways. First, StarWind HCAs can be delivered to a potential customer location for a PoC. The availability of HCAs is checked according to the schedule.
Second, remote access to StarWind HCA appliances can be provided. The availability of HCAs is checked according to the schedule.
|
Public Facing Clusters
Proof-of-Concept (PoC)
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer
Dual-Layer (secondary)
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
|
Single-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
DataCore SANsymphony is made easy by providing a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Scale Computing, customer deployments can be executed in hours instead of days.
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM as it introduces virtualization layer overhead and obstructs DataCore Software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel-integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
VMware vSphere: StarWind is installed inside a virtual machine (VM) on every ESXi host. The StarWind VM runs the Windows Server operating system.
Microsoft Hyper-V: By default, StarWind is installed in 'C:\Program Files\StarWind Software' on the Windows Server operating system.
|
KVM User Space
SCRIBE runs in KVM user space. Scale Computing made a conscious decision not to make SCRIBE kernel-integrated in order to avoid the risk that storage problems would cause a system panic, which would take down an entire node.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 5.5U3-6.7
Microsoft Hyper-V 2012-2019
StarWind is actively working on supporting KVM.
|
Linux KVM-based
Scale Computing HC3 uses its own proprietary HyperCore operating system and KVM-based hypervisor.
SCRIBE is an integral part of the Linux KVM platform, which enables Scale Computing to own the full software stack. Because VMware and Microsoft don't allow such tight integration, SCRIBE cannot be used with any other hypervisor platform.
Scale Computing HC3 supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
The Scale Computing HC3 hypervisor fully supports the following Guest operating systems:
Windows Server 2019
Windows Server 2016
Windows Server 2012 R2
Windows 10
Windows 8.1
CentOS Enterprise Linux
RHEL Enterprise Linux
Ubuntu Server
FreeBSD
SUSE Linux Enterprise
Fedora
Versions supported are versions currently supported by the operating system manufacturer.
SCRIBE = Scale Computing Reliable Independent Block Engine
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
iSCSI
NFS
SMB3
iSCSI is the native StarWind HCA storage protocol, as StarWind HCA provides block-based storage.
NFS can be used as the storage protocol in VMware vSphere environments by leveraging the File Server role that the Windows OS provides.
SMB3 can be used as the storage protocol in Microsoft Hyper-V environments by leveraging the File Server role that the Windows OS provides.
In both VMware vSphere and Microsoft Hyper-V environments, iSCSI is used as a protocol to provide block-level storage access. It allows consolidating storage from multiple servers providing it as highly available storage to target servers. In the case of vSphere, VMware iSCSI Initiator allows connecting StarWind iSCSI devices to ESXi hosts and further create datastores on them. In the case of Hyper-V, Microsoft iSCSI Initiator is utilized to connect StarWind iSCSI devices to the servers and further provide HA storage to the cluster (i.e. CSV).
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
Libscribe
In order to read from and write to Scale Computing HC3 block devices (aka Virtual SCRIBE Devices, or VSDs for short), the Libscribe component needs to be installed in KVM on each physical host. Libscribe is part of the QEMU process and presents virtual block devices to the VM. Because Libscribe is a QEMU block driver, SCRIBE is a supported device type and qemu-img commands work by default.
Although a virtIO driver does not strictly need to be installed in each VM, it is highly recommended as I/O performance benefits greatly from it. I/O submission takes place via the Linux Native Asynchronous I/O (AIO) that is present in KVM.
Shared storage devices in virtual Windows Clusters are supported.
QEMU = Quick Emulator
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Microsoft Windows Server 2012/2012R2/2016/2019
StarWind HCA provides highly available storage over iSCSI between the StarWind nodes and can additionally share storage over iSCSI to any OS that supports the iSCSI protocol.
|
N/A
Scale Computing HC3 does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
|
N/A
Scale Computing HC3 does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
N/A
StarWind HCA relies on the container support delivered by the hypervisor platform.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker containers to use storage persistently; in other words, it enables SANsymphony data volumes to persist beyond the lifetime of a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
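As a sketch of how a container host could consume SANsymphony-backed volumes through the Docker Volume plugin, the snippet below uses the Docker SDK for Python; the driver reference and volume name are assumptions based on the Docker Hub listing above, not verified syntax.

```python
# Sketch only: create a persistent Docker volume served by SANsymphony via the
# DataCore volume plugin, then mount it in a container. The driver reference is
# an assumption; check the plugin page on Docker Hub for the exact name.
import docker

client = docker.from_env()

# Volume data lives on SANsymphony (iSCSI/FC), so it outlives the container.
client.volumes.create(
    name="app-data",
    driver="datacore/sds-volume-plugin",  # assumed plugin/driver reference
)

client.containers.run(
    "alpine",
    "sh -c 'echo persisted > /data/proof.txt'",
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```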
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
The StarWind HA VMFS (datastore) can be used for deploying containers just as on common VMFS datastore.
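On the Docker side of the two-part vDVS installation described above, the steps could look like the Python sketch below; the plugin reference and the 'vsphere' driver name are assumptions drawn from the vDVS project, and the ESXi-side VIB still has to be installed separately.

```python
# Sketch only: step 2 of the vDVS installation plus a volume create, using the
# Docker SDK for Python. Plugin/driver names and options are assumptions.
import docker

client = docker.from_env()

# Install the Docker plugin inside the VM that will run containers.
client.plugins.install("vmware/vsphere-storage-for-docker")  # assumed name

# Create a volume backed by the (StarWind HA) VMFS datastore.
client.volumes.create(
    name="sql-data",
    driver="vsphere",               # assumed driver name registered by vDVS
    driver_opts={"size": "20gb"},   # assumed option syntax
)
```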
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03 or higher can be used.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The Kubernetes CSI plugin provides several plugins for integrating storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
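A minimal sketch of what consuming SANsymphony storage from Kubernetes could look like, using the official Kubernetes Python client; the StorageClass name is hypothetical and would be defined by the CSI plugin's own deployment.

```python
# Sketch only: request a persistent volume from a hypothetical
# SANsymphony-backed StorageClass. The actual class name is defined by the
# CSI plugin's deployment YAML.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="sansymphony",  # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```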
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
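For illustration, a StorageClass for the in-tree vSphere provisioner used by VCP could be created as in the Python sketch below; the class and datastore names are hypothetical.

```python
# Sketch only: a StorageClass for the in-tree vSphere Cloud Provider volume
# plugin (provisioner 'kubernetes.io/vsphere-volume'). Names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vmfs-thin"),
    provisioner="kubernetes.io/vsphere-volume",
    parameters={
        "diskformat": "thin",
        "datastore": "StarWindDatastore",  # hypothetical VMFS datastore name
    },
)

client.StorageV1Api().create_storage_class(body=sc)
```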
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
Although StarWind supports both VMware and Citrix VDI deployments on top of StarWind HCA, there is currently no specific documentation available.
|
Citrix XenDesktop
Parallels RAS
Leostream
Scale Computing HC3 HyperCore is a Citrix Ready platform. XenDesktop 7.6 LTSR, 7.8 and 7.9 are officially supported.
Scale Computing HC3 also actively supports the following desktop virtualization software:
- Parallels Remote Application Server (RAS);
- Leostream (=connection management).
A joint Reference Configuration white paper for Parallels RAS on Scale Computing HC3 was published in June 2019.
A joint Quick Start with Scale Computing HC3 and Leostream white paper was released in March 2019.
Since Scale Computing HC3 does not support the VMware vSphere hypervisor, VMware Horizon is not an option.
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010, when a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
The load-bearing numbers are based on approximate calculations of the VDI infrastructure that StarWind HCA can support.
There are no Login VSI benchmark numbers on record for StarWind HCA as of yet.
|
Workspot: 40 virtual desktops/node
Workspot VDI 2.0: the load-bearing number is based on Login VSI tests performed on hybrid HC2150 appliances using 2 vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepaper. Please note that this technical whitepaper is dated August 2016 and that Workspot VDI 2.0 no longer exists. Workspot's current portfolio only includes cloud solutions that run in Microsoft Azure.
Scale Computing has not published any Reference Architecture whitepapers for the Citrix XenDesktop platform.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Super Micro (StarWind branded)
Dell (StarWind branded)
Dell (OEM)
HPE (Select)
All StarWind HCA models can be delivered on either Dell or Supermicro server hardware. Dell and Supermicro rackmount chassis hardware models (1U-2U) are further selected to best fit the requirements of the specific end-user organization.
|
Lenovo (native and OEM)
SuperMicro (native)
Scale Computing leverages both Lenovo and SuperMicro server hardware as building blocks for its native HC3 appliances:
HC1200 is Supermicro server hardware
HC1250 is Supermicro server hardware
HC1250D is Lenovo server hardware
HC1250DF is Lenovo server hardware
HC5250D is Lenovo server hardware
Scale Computing has maintained a partnership with MBX Systems since 2012. MBX Systems is a hardware integrator based in the US, with headquarters both in Chicago and San Jose, that is tasked with assembling the native HC3 appliances.
In May 2018 Scale Computing and Lenovo entered in an OEM partnership to provide Scale Computing HC3 software on Lenovo ThinkSystem tower (ST250) or rack servers (SR630, SR650, SR250) with a wide variety of hardware choices (eg. CPU and RAM).
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
6 Native Models (L-AF, XL-AF, L-H, XL-H, L, XL)
20 Native Submodels
All StarWind HCA models are designed for small and mid-sized business (SMB) virtualized environments.
StarWind Native Models (Super Micro / Dell):
L-AF (All-Flash; Performance)
XL-AF (All-Flash; Performance @ scale)
L-H (Hybrid; Cost-efficient performance)
XL-H (Hybrid; Cost-efficient performance @ scale)
L (Magnetic-only; Capacity)
XL (Magnetic-only; Capacity @ scale)
|
4 Native Models
There are 4 native model series to choose from:
HE100 Edge Computing/Remote offices, stores, warehouses, labs, classrooms, ships
HE500 Edge Computing/Small remote sites/DR
HC1200 SMB/Midmarket
HC5000 Enterprise/Distributed Enterprise
There are 4 Lenovo model series to choose from:
ST250 Edge, Backup
SR250 Edge
SR630 Mid-market
SR650 Mid-market, High Capacity
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro also offers 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1 node per chassis
The StarWind HCA architecture uses 1U-1Node and 2U-1Node building blocks.
|
1 node per chassis
Scale Computing HE100 appliances are Intel NUCs.
Scale Computing HE500 appliances are either 1U building blocks or Towers.
Scale Computing HC1200 appliances are 1U building blocks.
Scale Computing HC5000 appliances are 2U building blocks.
Lenovo HC3 Edge ST250 appliances are Towers.
Lenovo HC3 Edge SR250 appliances are 1U building blocks.
Lenovo HC3 Edge SR630 appliances are 1U building blocks.
Lenovo HC3 Edge SR650 appliances are 2U building blocks.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
NUC = Next Unit of Computing
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specs are comparable, there is no reason to insist on one hardware vendor or the other. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
No
StarWind HCAs cannot be mixed and should be of the same configuration when used in a cluster as they come in a High-Availability (HA) pair as a single solution.
|
Yes
Scale Computing allows for mixing different server hardware in a single HC3 cluster, including nodes from different generations.
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
StarWind HCAs have flexible CPU configurations. StarWind can equip any Intel CPU per a customer's specific requirements. Only the default CPU configurations are shown below.
StarWind HCA model L-AF (All-Flash) CPU specs:
L-AF 2.8: 2x Intel Xeon Silver 4208 (8 cores)
L-AF 4.8: 2x Intel Xeon Silver 4208 (8 cores)
L-AF 7.6: 2x Intel Xeon Silver 4208 (8 cores)
L-AF 9.6: 2x Intel Xeon Silver 4208 (8 cores)
StarWind HCA model XL-AF (All-Flash) CPU specs:
XL-AF 13.4: 2x Intel Xeon Gold 5218 (16 cores)
XL-AF 15.3: 2x Intel Xeon Gold 5218 (16 cores)
XL-AF 19.2: 2x Intel Xeon Gold 5218 (16 cores)
XL-AF 23.0: 2x Intel Xeon Gold 5218 (16 cores)
StarWind HCA model L-H (Hybrid) CPU specs:
L-H 5.9: 2x Intel Xeon Silver 4208 (8 cores)
StarWind HCA model XL-H (Hybrid) CPU specs:
XL-H 8.8: 2x Intel Xeon Silver 4208 (8 cores)
XL-H 11.8: 2x Intel Xeon Silver 4208 (8 cores)
XL-H 16.8: 2x Intel Xeon Gold 5218 (16 cores)
XL-H 21.7: 2x Intel Xeon Gold 5218 (16 cores)
XL-H 37.6: 2x Intel Xeon Gold 5218 (16 cores)
StarWind HCA model L (Magnetic-only) CPU specs:
L-4D: 2x Intel Xeon Bronze 3204 (6 cores)
StarWind HCA model XL (Magnetic-only) CPU specs:
XL-8D: 2x Intel Xeon Silver 4208 (8 cores)
XL-16D: 2x Intel Xeon Silver 4208 (8 cores)
XL-24D: 2x Intel Xeon Silver 4208 (8 cores)
XL-32D: 2x Intel Xeon Silver 4208 (8 cores)
XL-48D: 2x Intel Xeon Silver 4208 (8 cores)
StarWind HCA nodes are equipped with 2nd generation Intel Xeon Scalable (Cascade Lake) processors by default.
|
Flexible: up to 3 options (native); extensive (Lenovo OEM)
Scale Computing HE100-series CPU options:
HE150: 1x Intel i3-10110U (2 cores); 1x Intel i5-10210U (4 cores); 1x i7-10710U (6 cores)
Scale Computing HE500-series CPU options:
HE500: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550F: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE500T: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550TF: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
Scale Computing HC1200-series CPU options:
HC1200: 1x Intel Xeon Bronze 3204 (6 cores); 1x Intel Xeon Silver 4208 (8 cores)
HC1250: 1x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6242 (16 cores)
HC1250D: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores); 2x Intel Xeon Gold 6244 (8 cores)
HC1250DF: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores); 2x Intel Xeon Gold 6244 (8 cores)
Scale Computing HC5000-series CPU options:
HC5200: 1x Intel Xeon Silver 4208 (8 cores); 1x Intel Xeon Silver 4210 (10 cores); 1x Intel Xeon Gold 6230 (20 cores)
HC5250D: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores)
Scale Computing HC1200 and HC5000 series nodes ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
Lenovo HC3 Edge CPU options:
ST250: 1x Intel Xeon E-2100
SR250: 1x Intel Xeon E-2100
SR630: 2x 1st or 2nd generation Intel Xeon Scalable (Skylake or Cascade Lake)
SR650: 2x 1st or 2nd generation Intel Xeon Scalable (Skylake or Cascade Lake)
|
|
|
Flexible
|
Flexible
StarWind HCA model L-AF (All-Flash) Memory specs:
L-AF 2.8: 64GB
L-AF 4.8: 128GB
L-AF 7.6: 128GB
L-AF 9.6: 128GB
StarWind HCA model XL-AF (All-Flash) Memory specs:
XL-AF 13.4: 192GB
XL-AF 15.3: 256GB
XL-AF 19.2: 320GB
XL-AF 23.0: 512GB
StarWind HCA model L-H (Hybrid) Memory specs:
L-H 5.9: 96GB
StarWind HCA model XL-H (Hybrid) Memory specs:
XL-H 8.8: 128GB
XL-H 11.8: 128GB
XL-H 16.8: 256GB
XL-H 21.7: 256GB
XL-H 37.6: 512GB
StarWind HCA model L (Magnetic-only) Memory specs:
L-4D: 32GB
StarWind HCA model XL (Magnetic-only) Memory specs:
XL-8D: 64GB
XL-16D: 96GB
XL-24D: 128GB
XL-32D: 128GB
XL-48D: 192GB
Memory can be expanded on all StarWind HCA models and sub-models.
|
Flexible: up to 8 options
Scale Computing HE100-series memory options:
HE150: 8GB, 16GB, 32GB, 64GB
Scale Computing HE500-series memory options:
HE500: 16GB, 32GB, 64GB
HE550: 16GB, 32GB, 64GB
HE550F: 16GB, 32GB, 64GB
HE500T: 16GB, 32GB, 64GB
HE500TF: 16GB, 32GB, 64GB
Scale Computing HC1200-series memory options:
HC1200: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB
HC1250: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB
HC1250D: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
HC1250DF: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
Scale Computing HC5000-series memory options:
HC5200: 64GB, 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
HC5250D: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB, 1TB, 1.5TB
Lenovo HC3 Edge series memory options:
ST250: 16GB - 64GB
SR250: 16GB - 64GB
SR630: 64GB - 768GB
SR650: 64GB - 1.5TB
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: number of storage devices
StarWind HCA model L-AF (All-Flash) Storage specs:
L-AF 2.8: 4x 960GB SATA Mix Use SSD (RAID5)
L-AF 4.8: 6x 960GB SATA Mix Use SSD (RAID5)
L-AF 7.6: 5x 1.92TB SATA Mix Use SSD (RAID5)
L-AF 9.6: 6x 1.92TB SATA Mix Use SSD (RAID5)
StarWind HCA model XL-AF (All-Flash) Storage specs:
XL-AF 13.4: 8x 1.92TB SATA Mix Use SSD (RAID5)
XL-AF 15.3: 9x 1.92TB SATA Mix Use SSD (RAID5)
XL-AF 19.2: 11x 1.92TB SATA Mix Use SSD (RAID5)
XL-AF 23.0: 13x 1.92TB SATA Mix Use SSD (RAID5)
StarWind HCA model L-H (Hybrid) Storage specs:
L-H 5.9: 2x 1.92TB SATA Mix Use SSD (RAID1) + 2x 4TB 7.2K NL-SAS (RAID1)
StarWind HCA model XL-H (Hybrid) Storage specs:
XL-H 8.8: 4x 960GB SATA Mix Use SSD (RAID5) + 6x 2TB 7.2K NL-SAS (RAID10)
XL-H 11.8: 5x 960GB SATA Mix Use SSD (RAID5) + 4x 4TB 7.2K NL-SAS (RAID10)
XL-H 16.8: 6x 960GB SATA Mix Use SSD (RAID5) + 6x 4TB 7.2K NL-SAS (RAID10)
XL-H 21.7: 4x 1.92TB SATA Mix Use SSD (RAID5) + 8x 4TB 7.2K NL-SAS (RAID10)
XL-H 37.6: 6x 1.92TB SATA Mix Use SSD (RAID5) + 14x 4TB 7.2K NL-SAS (RAID10)
StarWind HCA model L (Magnetic-only) Storage specs:
L-4D: 4x 2TB 7.2K NL-SAS (RAID10)
StarWind HCA model XL (Magnetic-only) Storage specs:
XL-8D: 8x 2TB 7.2K NL-SAS (RAID10)
XL-16D: 8x 4TB 7.2K NL-SAS (RAID10)
XL-24D: 12x 4TB 7.2K NL-SAS (RAID10)
XL-32D: 8x 8TB 7.2K NL-SAS (RAID10)
XL-48D: 12x 8TB 7.2K NL-SAS (RAID10)
Storage can be expanded on the following StarWind HCA models:
- All L-AF and XL-AF submodels
- XL-H 8.8, XL-H 11.8, XL-H 16.8, XL-H 21.7, XL-H 37.6
- XL-8D, XL-16D and XL-32D
|
Capacity: up to 5 options (HDD, SSD)
Fixed: Number of disks
Scale Computing HE100-series storage options:
HE150: 1x 250GB/500GB/1TB/2TB M.2 NVMe
Scale Computing HE500-series storage options:
HE500: 4x 1/2/4/8TB NL-SAS [magnetic-only]
HE550: 1x 480GB/960GB SSD + 3x 1/2/4TB NL-SAS [hybrid]
HE550F: 4x 240GB/480GB/960GB SSD [all-flash]
HE500T: 4x 1/2/4/8TB NL-SAS + 8x 4/8TB NL-SAS [magnetic-only]
HE550TF: 4x 240GB/480GB/960GB SSD [all-flash]
Scale Computing HC1200-series storage options:
HC1200: 4x 1/2/4/8/12TB NL-SAS [magnetic-only]
HC1250: 1x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8/12TB NL-SAS [hybrid]
HC1250D: 1x 960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
HC1250DF: 4x 960GB/1.92TB/3.84TB/7.68TB SSD [all-flash]
Scale Computing HC5000-series storage options:
HC5200: 12x 8/12TB NL-SAS [magnetic-only]
HC5250D: 3x 960GB/1.92TB/3.84TB/7.68TB SSD + 9x 4/8TB NL-SAS [hybrid]
Lenovo HC3 Edge series storage options:
ST250: 8x 1/2/4/8TB NL-SAS [magnetic only]
ST250: 4x 960GB/1.92TB/3.84TB SSD [all-flash]
SR250: 4x 1/2/4/8TB NL-SAS [magnetic only]
SR250: 1x 960GB/1.92TB/3.84TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
SR250: 4x 960GB/1.92TB/3.84TB SSD [all-flash]
SR630: 4x 1/2/4/8TB NL-SAS [magnetic only]
SR630: 1x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
SR630: 4x 1.92TB/3.84TB/7.68TB SSD [all-flash]
SR650: 3x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 9x 1/2/4/8TB NL-SAS [hybrid]
The SSDs in all mentioned nodes are regular SSDs (non-NVMe).
SATA/NL-SAS = 7.2K RPM = high-capacity, low-speed drives
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Fixed (Private network)
StarWind HCA model L-AF (All-Flash) Networking specs:
L-AF 2.8: 4x 1GbE and 2x 10GbE (Private, RDMA-enabled)
L-AF 4.8: 4x 1GbE and 2x 10GbE (Private, RDMA-enabled)
L-AF 7.6: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
L-AF 9.6: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
StarWind HCA model XL-AF (All-Flash) Networking specs:
XL-AF 13.4: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-AF 15.3: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-AF 19.2: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-AF 23.0: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
StarWind HCA model L-H (Hybrid) Networking specs:
L-H 5.9: 2x 1GbE + 2x 10GbE NDC
StarWind HCA model XL-H (Hybrid) Networking specs:
XL-H 8.8: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 11.8: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 16.8: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 21.7: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 37.6: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
StarWind HCA model L (Magnetic-only) Networking specs:
L-4D: 2x 1GbE (LOM) and 4x 1GbE
StarWind HCA model XL (Magnetic-only) Networking specs:
XL-8D: 4x 1GbE and 2x 10GbE (Private)
XL-16D: 4x 1GbE and 2x 10GbE (Private)
XL-24D: 4x 1GbE and 2x 10GbE (Private)
XL-32D: 4x 1GbE and 2x 10GbE (Private)
XL-48D: 4x 1GbE and 2x 10GbE (Private)
Private networks that are used for StarWind remain fixed. Other NICs can be replaced as per customer-specific requirements.
|
Fixed: HC1200/5000: 10GbE; HE150/500T: 1GbE
Flexible: HE500: 1/10GbE
Scale Computing HE100-series networking options:
HE150: 1x 1GbE
Scale Computing HE500-series networking options:
HE500: 4x 1GbE or 4x 10GbE SFP+
HE550: 4x 1GbE or 4x 10GbE SFP+
HE550F: 4x 1GbE or 4x 10GbE SFP+
HE500T: 2x 1GbE
HE550TF: 2x 1GbE
Scale Computing HC1200-series networking options:
HC1200: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250D: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250DF: 4x 10GbE Base-T/SFP+ bonded active/passive
Scale Computing HC5000-series networking options:
HC5200: 4x 10GbE Base-T/SFP+ bonded active/passive
HC5250D: 4x 10GbE Base-T/SFP+ bonded active/passive
Lenovo HC3 Edge series networking options:
ST250: 2x 1GbE
SR250: 4x 1GbE or 4x 10GbE SFP+
SR630: 4x 10GbE BaseT or 4x 10GbE SFP+
SR650: 4x 10GbE BaseT or 4x 10GbE SFP+
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla/Quadro
GPUs can be added to custom StarWind HCA models depending on the requirements.
The following NVIDIA GPU card configurations can be ordered along with custom StarWind HCA models:
NVIDIA Tesla P4
NVIDIA Quadro P4000
|
N/A
Scale Computing HC3 currently does not provide any GPU options.
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
CPU
Memory
Storage
GPU
Storage: All available disk slots in a StarWind HCA node can be utilized. Additionally, the StarWind HCA technical architecture allows scaling using external disk shelves (JBODs).
|
CPU
Memory
The Scale Computing HC3 platform allows for expanding CPU and Memory hardware resources. Storage resources (the number of disks within a single node) are usually not expanded.
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only
Storage-only
In the case of storage-only scale-out, StarWind HCA storage nodes are based on bare metal Windows Server with no hypervisor software (e.g. VMware ESXi) installed. The StarWind software runs as a Windows-native application and provides storage to the hypervisor hosts.
|
Storage+Compute
Storage-only
Storage+Compute: Existing Scale Computing HC3 clusters can be expanded by adding additional nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: N/A; A Scale Computing HC3 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
Storage-only: A Scale Computing HC3 node can be configured as a storage-only node by setting a flag and has to be performed by Scale Computing engineering (end-user organizations cannot set the flag themselves).
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
2-64 nodes in 1-node increments
The 64-node limit applies to both VMware vSphere and Microsoft Hyper-V environments.
|
3-8 nodes in 1-node increments
There is a maximum of 8 nodes within a single cluster. Larger clusters do exist, but must be requested and are evaluated on a per use-case basis.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In the case that the SANsymphony servers are fully operational but do not see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically and data corruption is prevented by freezing the vDisk and waiting for user input to resolve the conflict. Possible resolutions are to declare one side of the mirror the new active data set and discard all tracked changes on the other side, or to split the mirror and manually merge the two data sets into a third vDisk.
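The recovery behavior described above (track writes while the mirror link is down, then resynchronize only the tracked changes once it returns) can be illustrated with a conceptual Python sketch; this is a simplified model of the mechanism, not DataCore's implementation.

```python
# Conceptual model only: track which blocks change while the mirror link is
# down, then replay just those blocks when the connection is restored.
class MirroredVirtualDisk:
    def __init__(self):
        self.primary = {}           # block_id -> data (local, primary path)
        self.secondary = {}         # block_id -> data (remote mirror copy)
        self.mirror_up = True
        self.tracked_changes = set()

    def write(self, block_id, data):
        self.primary[block_id] = data
        if self.mirror_up:
            self.secondary[block_id] = data     # synchronous mirror write
        else:
            self.tracked_changes.add(block_id)  # log the change for later resync

    def mirror_restored(self):
        # Replay only the blocks that changed during the outage.
        for block_id in self.tracked_changes:
            self.secondary[block_id] = self.primary[block_id]
        self.tracked_changes.clear()
        self.mirror_up = True

vdisk = MirroredVirtualDisk()
vdisk.mirror_up = False
vdisk.write(7, b"new data")  # served locally, change is tracked
vdisk.mirror_restored()      # incremental resync instead of a full copy
```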
|
2 Node minimum
StarWind HCAs or storage-only HCAs come as a 2-node minimum. They can scale up further by adding more disks or JBODs, or scale out by adding new HCAs or storage-only nodes.
|
1 Node minimum
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Block Storage Pool
StarWind HCA only serves block devices as storage volumes to the supported OS platforms.
The underlying storage is first aggregated with hardware RAID inside StarWind HCA. Then, the storage is replicated by StarWind at the block level across 2 or 3 nodes and further provided as a single pool (single StarWind virtual device) or as multiple pools (multiple StarWind devices).
|
Block Storage Pool
Scale Computing HC3 only serves block devices to the supported OS guest platforms. VMs running on HC3 have direct, block-level access to virtual SCRIBE devices (VSDs, aka virtual disks) in the clustered storage pool without the complexity or performance overhead introduced by using remote storage protocols.
A critical software component of HyperCore is the Scale Computing Reliable Independent Block Engine, known as SCRIBE. SCRIBE is an enterprise-class, clustered block storage layer, purpose-built to be consumed directly by the HC3 embedded KVM-based hypervisor.
SCRIBE discovers and aggregates all block storage devices across all nodes of the system into a single managed pool of storage. All data written to this pool is immediately available for read and write by any and every node in the storage system, allowing for sophisticated data redundancy, data deduplication, and load balancing schemes to be used by higher layers of the stack, such as the HyperCore compute layer.
SCRIBE is a wide-striped storage architecture that combines all disks in the cluster into a single storage pool that is tiered between flash SSD and spinning HDD storage.
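The wide-striping idea can be illustrated with a minimal sketch (Python, conceptual only; SCRIBE's real placement logic is not described here and all names are hypothetical): every block written to the pool is placed on a disk drawn from all disks of all nodes, so capacity and I/O are spread across the whole cluster.
    # Conceptual wide-striping sketch (hypothetical, not SCRIBE code):
    # place each block on a disk chosen round-robin across every disk of every node.
    class WideStripedPool:
        def __init__(self, nodes):
            # nodes: {"node1": ["ssd0", "hdd0", "hdd1"], "node2": [...], ...}
            self.disks = [(n, d) for n, disks in nodes.items() for d in disks]
            self.placement = {}   # block_id -> (node, disk)
            self.next_disk = 0

        def write(self, block_id, data):
            target = self.disks[self.next_disk % len(self.disks)]
            self.next_disk += 1
            self.placement[block_id] = target
            return target          # any node can later read the block from this location

    pool = WideStripedPool({"node1": ["ssd0", "hdd0"], "node2": ["ssd0", "hdd0"], "node3": ["ssd0", "hdd0"]})
    print(pool.write("vsd1-block42", b"..."))   # e.g. ('node1', 'ssd0')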
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but such tasks can be automated and integrated with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
StarWind's core approach is to keep data close (local) to the VM in order to avoid slow data transfers through the network and achieve the highest performance the setup can provide. The solution is designed to store the first instance of all data on locally available storage (primary path) and the mirrored instance on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference. Data does not automatically follow the VM when the VM is moved to another node.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
None
Scale Computing HC3 is based on a shared-nothing storage architecture. Scale Computing HC3 enables every drive in every node throughout the cluster to contribute to the storage performance and capacity of every virtual disk (VSD) presented by the SCRIBE storage layer. When a VM is moved to another node, data remains in place and does not follow the VM because data is stored and available across all nodes residing in the cluster.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
SAN or NAS
Direct-attached: StarWind can take control of formatted disks (NTFS). StarWind software can also present RAW unformatted disks over the SCSI Pass-Through Interface, which enables remote initiator clients to use any type of hard drive (PATA/SATA/RAID).
External SAN/NAS Storage: SAN/NAS can be connected over Ethernet and used by StarWind HCA as long as it is provided as block storage (iSCSI). If data needs to be replicated between NAS systems, there should be 2 NAS systems connected to both StarWind HCA nodes.
|
Direct-Attached (Raw)
Direct-attached: The software takes ownership of the unformatted physical disks available in each host.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
Scale Computing HC3 appliance models storage composition:
HC1200: Magnetic-only
HC1250: Hybrid
HC1250D: Hybrid
HC1250DF: All-flash
HC5250D: Hybrid
A Magnetic-only node is called a Non-tiered node and contains 100% HDD drives and no SSD drives.
A Hybrid node is called a Tiered node and contains 25% SSD drives and 75% HDD drives.
An All-Flash node contains 100% SSD drives and no HDD drives.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SD, USB, DOM, SSD/HDD
By default, all StarWind HCA nodes come with redundant SSDs or M.2 sticks for OS installation.
|
HDD or SSD (partition)
By default, for each 1TB of data, 8MB is allocated for metadata. The data and metadata are stored on the physical storage devices (RSDs) and both are protected using mirroring (2N). Because the metadata is so lightweight, all of the metadata of all of the online VSDs is cached in DRAM.
VSD = Virtual SCRIBE Device
RSD = Real SCRIBE Device
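The 8MB-per-1TB ratio makes the DRAM footprint of the metadata easy to estimate; the back-of-the-envelope calculation below (Python, illustrative only) shows why caching all online VSD metadata in memory is feasible.
    # Illustrative metadata sizing based on the stated 8 MB of metadata per 1 TB of data.
    METADATA_PER_TB_MB = 8

    def metadata_dram_mb(online_vsd_capacity_tb):
        # Metadata is mirrored (2N) on disk, but only one copy needs to be cached in DRAM.
        return online_vsd_capacity_tb * METADATA_PER_TB_MB

    print(metadata_dram_mb(100))   # 100 TB of online VSDs -> ~800 MB of metadata in DRAM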
|
|
|
|
Memory |
|
|
|
DRAM
|
DRAM
|
DRAM
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current-generation x86-64 servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid-state disk speeds without the expense. SANsymphony uses a common cache pool for both reads and writes.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
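The watermark behavior described above can be sketched as follows (Python, conceptual only; the percentage threshold and all names are hypothetical, not DataCore's actual values): when dirty write data exceeds a set fraction of the cache, the virtual disk temporarily falls back to write-through until the balance is restored.
    # Conceptual sketch of the write-cache watermark fallback (hypothetical values and names).
    class CachePool:
        def __init__(self, total_mb, write_watermark_pct=50):
            self.total_mb = total_mb
            self.dirty_write_mb = 0
            self.write_watermark_pct = write_watermark_pct

        def write_mode(self):
            used_pct = 100 * self.dirty_write_mb / self.total_mb
            # Above the watermark: write-through until de-staging brings the level back down.
            return "write-through" if used_pct > self.write_watermark_pct else "write-back"

    cache = CachePool(total_mb=8192)
    cache.dirty_write_mb = 5000
    print(cache.write_mode())   # 'write-through' until dirty data is de-staged to disk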
|
Read/Write Cache
StarWind HCA accelerates reads and writes by leveraging conventional RAM.
The memory cache is filled up with data mainly during write operations. During read operations, data enters the cache only if the cache contains either empty memory blocks or lines that were allocated for these entries earlier and have not yet been fully exhausted.
StarWind HCA supports two Memory (L1 Cache) Policies:
1. Write-Back, caches writes in DRAM only and acknowledges back to the originator when complete in DRAM.
2. Write-Through, caches writes in both DRAM and underlying storage, and acknowledges back to the originator when complete in the underlying storage.
This means that exclusively caching writes in conventional memory is optional. When the Write-Through policy is used, DRAM is used primarily for caching reads.
To change the cache size, the StarWind service should first be stopped on one HCA, the cache size changed, and the service started again; the same process is then repeated on the partner HCA. This keeps VMs up and running during cache changes.
In the majority of use cases, there is no need to assign L1 cache for all-flash storage arrays.
Note: In case of using the Write-Back policy for DRAM, UPS units have to be installed to ensure the correct shutdown of StarWind Virtual SAN nodes. If a power outage occurs, this will prevent the loss of cached data. The UPS capacity must cover the time required for flushing the cached data to the underlying storage.
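The difference between the two L1 policies comes down to where the write is acknowledged; the sketch below (Python, conceptual only, not StarWind code) contrasts the two acknowledgment paths and shows why write-back needs UPS protection.
    # Conceptual contrast of the two L1 cache policies (hypothetical, not StarWind code).
    def write(block_id, data, dram, disk, policy):
        dram[block_id] = data                     # data always lands in the DRAM cache
        if policy == "write-through":
            disk[block_id] = data                 # also persisted before acknowledging
            return "ack after disk write"
        elif policy == "write-back":
            # Acknowledged from DRAM only; data is de-staged later, hence the UPS requirement.
            return "ack after DRAM write"

    dram, disk = {}, {}
    print(write("blk1", b"...", dram, disk, "write-back"))      # ack after DRAM write
    print(write("blk2", b"...", dram, disk, "write-through"))   # ack after disk write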
|
Read Cache
Metadata structures
By default, for each 1TB of data, 8MB is allocated for metadata. The data and metadata are stored on the physical storage devices (RSDs) and both are protected using mirroring (2N). Because the metadata is so lightweight, all of the metadata of all of the online VSDs is cached in DRAM.
VSD = Virtual SCRIBE Device
RSD = Real SCRIBE Device
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Configurable
The size of L1 cache should be equal to the amount of the average working data set.
There are no default or maximum values for RAM cache as such. The maximum size that can be assigned to the StarWind RAM cache is limited by the RAM available to the system; you also need to make sure that other running applications have enough RAM for their operations.
Additionally, the total amount of L1 cache assigned influences the time required for system shutdown so overprovisioning of the L1 cache amount can cause StarWind service interruption and the loss of cached data. The minimum size assigned for RAM cache in either write-back or write-through mode is 1MB. However, StarWind recommends assigning a StarWind RAM cache size that matches the size of the working data set.
|
4GB+
4GB of RAM is reserved per node for the entire HC3 system to function. No specific RAM is reserved for caching but the system will use any available memory as needed for caching purposes.
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
|
SSD, NVMe
HyperCore-Direct for NVMe can be requested and is evaluated by Scale Computing on a per-customer scenario basis.
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Read/Write Cache (hybrid)
Persistent storage (all-flash)
StarWind HCA supports a single Flash (L2 Cache) Policy:
1. Write-Through, caches writes in both Flash (SSD/NVMe) and the underlying storage, and acknowledges back to the originator when complete in the underlying storage.
With the write-through policy, new blocks are written both to cache layer and the underlying storage synchronously. However, in this mode, only the blocks that are read most frequently are kept in the cache layer, accelerating read operations. This is the only mode available for StarWind L2 cache.
In the case of Write-Through, cached data does not need to be offloaded to the backing store when a device is removed or the service is stopped.
In the majority of use cases, if L1 cache is already assigned, there is no need to configure L2 cache.
A StarWind HCA solution with L2 cache configured should have more RAM available, since L2 cache needs 6.5 GB of RAM per 1 TB of L2 cache to store metadata.
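The stated 6.5 GB of RAM per 1 TB of L2 cache makes the memory overhead straightforward to estimate, as in the quick illustrative calculation below (Python).
    # Illustrative RAM overhead for L2 cache metadata (6.5 GB of RAM per 1 TB of L2 cache).
    RAM_GB_PER_TB_L2 = 6.5

    def l2_metadata_ram_gb(l2_cache_tb):
        return l2_cache_tb * RAM_GB_PER_TB_L2

    print(l2_metadata_ram_gb(2))   # a 2 TB L2 cache needs about 13 GB of RAM for metadata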
|
Persistent Storage
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
All-Flash: 4-20+ devices per node
Hybrid: 2 devices per node
All available disk slots in a StarWind HCA node can be utilized. Additionally, the StarWind HCA technical architecture allows scaling using external disk shelves (JBODs).
|
Hybrid: 1-3 SSDs per node
All-Flash: 4 SSDs per node
Flash devices are not mandatory in a Scale Computing HC3 solution.
Each HC1200 hybrid node has 1 SSD drive attached.
Each HC1250 all-flash node has 4 SSD drives attached.
Each HC5250 node has 3 SSD drives attached.
An HC1250 all-flash node can have a maximum of 15.36TB of raw SSD storage attached.
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
Hybrid: SAS or SATA
|
Hybrid: SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Persistent Storage
|
Persistent Storage
|
Persistent Storage
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw storage device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
Hybrid: 4+ devices per node
Magnetic-only: 4-12+ devices per node
All available disk slots in a StarWind HCA node can be utilized. Additionally, the StarWind HCA technical architecture allows scaling using external disk shelves (JBODs).
|
Magnetic-only: 4 HDDs per node
Hybrid: 3 or 9 HDDs per node
Magnetic devices are not mandatory in a Scale Computing HC3 solution.
Each HC1200 magnetic-only node has 4 HDD drives attached.
Each HC1250 hybrid node has 3 HDD drives attached.
Each HC5250 hybrid node has 9 HDD drives attached.
An HC1200 magnetic-only node can have a maximum of 32TB of raw HDD storage attached.
An HC1250 hybrid node can have a maximum of 24TB of raw HDD storage attached.
An HC5250 hybrid node can have a maximum of 72TB of raw HDD storage attached.
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in the DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms, each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, applications experience much faster write behavior.
By default, the limit of dirty write data allowed per Virtual Disk is 128MB. This limit can be adjusted, but in practice there has never been a reason to do so. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so the data is effectively written directly to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
|
DRAM (mirrored)
Flash Layer (SSD, NVMe)
|
Flash/HDD
The persistent write buffer depends on the type of the block storage pool (Flash or HDD).
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) but still written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When an active copy in a 3-way mirror fails, the backup copy is promoted to the active state. When the failed mirror copy is repaired, it automatically assumes the backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
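A minimal sketch of the synchronous 2-way/3-way mirroring principle described above (Python, conceptual only, not SANsymphony internals): a write is acknowledged to the host only after every mirror copy has been updated.
    # Conceptual synchronous mirroring sketch (hypothetical, not SANsymphony internals).
    def mirrored_write(block_id, data, copies):
        # copies: list of dicts, one per mirror copy (2 for 2-way, 3 for 3-way).
        for copy in copies:
            copy[block_id] = data       # synchronous update of every copy
        return "ack to host"            # only acknowledged once all copies are written

    active_a, active_b, backup = {}, {}, {}
    print(mirrored_write("blk7", b"...", [active_a, active_b, backup]))   # 3-way mirror write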
|
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
StarWind HCA uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster. When a physical disk fails, hardware RAID maintains data availability.
The hardware RAID level that is applied depends on the media type and the number of devices used:
Flash drives = RAID5
2 Flash drives in Hybrid config = RAID1
Magnetic drives = RAID10
2 Magnetic drives in Hybrid config = RAID1
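The RAID-level selection listed above can be expressed as a simple lookup (Python, illustrative only, reproducing the mapping stated in this document; a 2-drive configuration here refers to the hybrid case).
    # Illustrative mapping of media type and device count to the hardware RAID level used.
    def hca_raid_level(media, devices):
        if devices == 2:
            return "RAID1"              # 2 flash or 2 magnetic drives in a hybrid config
        if media == "flash":
            return "RAID5"
        if media == "magnetic":
            return "RAID10"

    print(hca_raid_level("flash", 8))      # RAID5
    print(hca_raid_level("magnetic", 2))   # RAID1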
|
2-way Mirroring (Network RAID-10)
Within a Scale Computing HC cluster all data is written twice to the block storage pool for redundancy (2N). It is equivalent to Network RAID-10, as the two data chunks are placed on separate physical disks of separate physical nodes within the cluster. This protects against 1 disk failure and 1 node failure at the same time, and aggregates the I/O and throughput capabilities of all the individual disks in the cluster (= wide striping).
Once an RSD fails, the system re-mirrors the data using the free space in the HC3 cluster as a hot spare. Because all physical disks contain data, rebuilds are very fast. Scale Computing HC3 is often able to detect the deteriorated state of a physical storage device in advance and proactively copies data to other devices ahead of an actual failure.
Currently only 1 Replica (2N) can be maintained, as the setting is not configurable for end-users.
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) but still written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When an active copy in a 3-way mirror fails, the backup copy is promoted to the active state. When the failed mirror copy is repaired, it automatically assumes the backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N)
Replicas: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster.
|
2-way Mirroring (Network RAID-10)
Within a Scale Computing HC cluster all data is written twice to the block storage pool for redundancy (2N). It is equivalent to Network RAID-10, as the two data chunks are placed on separate physical disks of separate physical nodes within the cluster. This protects against 1 disk failure and 1 node failure at the same time, and aggregates the I/O and throughput capabilities of all the individual disks in the cluster (= wide striping).
Once an RSD fails, the system re-mirrors the data using the free space in the HC3 cluster as a hot spare. Because all physical disks contain data, rebuilds are very fast. Scale Computing HC3 is often able to detect the deteriorated state of a physical storage device in advance and proactively copies data to other devices ahead of an actual failure.
Currently only 1 Replica (2N) can be maintained, as the setting is not configurable for end-users.
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Not relevant (1-node chassis only)
|
Not relevant (1U/2U appliances)
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
N/A
|
N/A
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
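One way to read the figures above is as additive overheads: the replica overhead plus the local hardware RAID overhead counted once. The sketch below (Python, illustrative only, reproducing this document's own accounting; the assumption that the RAID5 figures correspond to 4- or 5-disk RAID groups is mine) regenerates the stated percentages.
    # Illustrative, additive reading of the overhead figures above (assumption: the RAID5
    # figures correspond to 4- or 5-disk RAID groups, i.e. 33% or 25% parity overhead).
    def protection_overhead_pct(replicas, raid):
        replica_overhead = (replicas - 1) * 100          # 2N -> 100%, 3N -> 200%
        raid_overhead = {"RAID1/10": 100, "RAID5-4disk": 33, "RAID5-5disk": 25, None: 0}[raid]
        return replica_overhead + raid_overhead

    print(protection_overhead_pct(2, "RAID5-5disk"))   # 125 (%)
    print(protection_overhead_pct(3, "RAID1/10"))      # 300 (%)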
|
Mirroring (2N) (primary): 100%
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
N/A (hardware dependent)
StarWind HCA fully relies on the hardware layer to protect data integrity. This means that the StarWind software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks (software)
Disk scrubbing (software)
The HC3 system performs continuous read integrity checks on data blocks to detect corruption errors. As blocks are written to disk, replica blocks are written to other disks within the storage pool for redundancy. Disks are continuously scrubbed in the background for errors, and any corruption found is repaired from the replica data blocks.
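The scrub-and-repair behavior can be sketched as follows (Python, conceptual only, not HC3 internals; CRC32 is used here purely for illustration): each block's checksum is verified against its stored value and a corrupt block is rewritten from its replica.
    # Conceptual scrub sketch (hypothetical, not HC3 internals): verify checksums and
    # repair a corrupt block from its replica on another disk/node.
    import zlib

    def scrub(primary, replica, checksums):
        repaired = []
        for block_id, data in primary.items():
            if zlib.crc32(data) != checksums[block_id]:      # read integrity check failed
                primary[block_id] = replica[block_id]        # repair from the replica copy
                repaired.append(block_id)
        return repaired

    primary = {"b1": b"good", "b2": b"corrupt!"}
    replica = {"b1": b"good", "b2": b"good"}
    checksums = {"b1": zlib.crc32(b"good"), "b2": zlib.crc32(b"good")}
    print(scrub(primary, replica, checksums))   # ['b2'] repaired from its replica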
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Built-in (native)
HyperCore snapshots use a space efficient allocate-on-write methodology where no additional storage is used at the time the snapshot is taken, but as blocks are changed the original content blocks are preserved, and new content written to freshly allocated space on the cluster.
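Allocate-on-write means a snapshot costs nothing at creation time: only when a block is changed afterwards is new space allocated for the new content, while the original block stays referenced by the snapshot. A minimal sketch (Python, conceptual only, not HyperCore code):
    # Conceptual allocate-on-write snapshot sketch (hypothetical, not HyperCore code).
    class Volume:
        def __init__(self):
            self.blocks = {}          # block_id -> data (the live VM image)
            self.snapshots = []       # each snapshot is a frozen block map (references only)

        def snapshot(self):
            self.snapshots.append(dict(self.blocks))   # no data copied at snapshot time

        def write(self, block_id, data):
            self.blocks[block_id] = data   # new content written to freshly allocated space;
                                           # the snapshot's map still references the old block

    vol = Volume()
    vol.write("b1", b"v1")
    vol.snapshot()
    vol.write("b1", b"v2")
    print(vol.snapshots[0]["b1"], vol.blocks["b1"])   # b'v1' b'v2'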
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
The snapshot feature can also be paired with asynchronous replication, which provides the ability to have a long-distance remote copy at a third site with its own retention time.
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Local (+ Remote)
Manual snapshots are always created on the source HC3 cluster only and are never deleted by the system.
Without remote replication active on a VM, snapshots created using snapshot schedules are also created on the source HC3 cluster only.
With remote replication active, a snapshot schedule repeatedly creates a VM snapshot on the source cluster and then copies that snapshot to the target cluster, where it is retained for a specified number of minutes/hours/days/weeks/months. The default remote replication frequency of 5 minutes, combined with the default retention of 25 minutes, means that by default 5 snapshots are maintained on the target HC3 cluster at any given time.
A VM can only have one snapshot schedule assigned at a time. However, a schedule can contain multiple recurrence rules. Each recurrence rule consists of a replication snapshot frequency (x minutes/hours/days/weeks/months), an execution time (e.g. 12:00 AM), and a retention (y minutes/hours/days/weeks).
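The number of snapshots retained on the target follows directly from frequency and retention; the quick illustrative calculation below (Python) reproduces the default figure mentioned above.
    # Illustrative retention arithmetic: how many replicated snapshots exist on the target
    # at steady state, given a replication frequency and a retention window.
    def snapshots_retained(frequency_min, retention_min):
        return retention_min // frequency_min

    print(snapshots_retained(5, 25))   # 5 snapshots with the default 5-min/25-min settings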
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
5 minutes
A snapshot schedule allows a minimum frequency of 5 minutes. However, ScaleCare Support recommends no less than every 15 minutes as a general best practice.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Per VM
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
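Conceptually, CDP is a timestamped I/O journal plus named markers; the sketch below (Python, illustrative only, not SANsymphony's CDP implementation; a logical clock stands in for wall-clock timestamps) shows how a rollback to a marker replays the log only up to that point.
    # Conceptual CDP sketch (hypothetical, not SANsymphony internals): a timestamped I/O log
    # with rollback markers; restoring replays writes up to the chosen point in time.
    class CDPLog:
        def __init__(self):
            self.clock = 0      # logical timestamp (a real system would use wall-clock time)
            self.entries = []   # (timestamp, block_id, data)
            self.markers = {}   # marker name -> timestamp

        def _tick(self):
            self.clock += 1
            return self.clock

        def log_write(self, block_id, data):
            self.entries.append((self._tick(), block_id, data))

        def add_marker(self, name):
            self.markers[name] = self._tick()   # e.g. after quiescing the application

        def restore(self, marker):
            cutoff = self.markers[marker]
            image = {}
            for ts, block_id, data in self.entries:
                if ts <= cutoff:
                    image[block_id] = data      # replay only writes up to the marker
            return image

    log = CDPLog()
    log.log_write("b1", b"before")
    log.add_marker("app-consistent")
    log.log_write("b1", b"after")
    print(log.restore("app-consistent"))   # {'b1': b'before'}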
|
External
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and IronMountain IronCloud.
|
Built-in (native)
By combining Scale Computing HC3s native snapshot feature with its native remote replication mechanism, backup copies can be created on remote HC3 clusters.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
Apart from the native features, Scale Computing HC3 supports any in-guest 3rd party backup agents that are designed to run on Intel-based virtual machines on our supported OS platforms.
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and IronMountain IronCloud.
|
Locally
To remote sites
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and IronMountain IronCloud.
|
5 minutes (Asynchronous)
VM snapshots are created automatically by the replication process as quickly as every 5 minutes (as long as the previous snapshot’s change blocks have been fully replicated to the target HC3 cluster). The remote replication default schedule will take a snapshot every 5 minutes and keep snapshots for 25 minutes.
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and IronMountain IronCloud.
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
For Windows VMs that require it, VSS snapshot integration is provided in the VIRTIO driver package.
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and IronMountain IronCloud.
|
Entire VM
Although Scale Computing HC3 uses block-storage, the platform is capable of attaining per VM-granularity.
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and IronMountain IronCloud. | |