|
General
|
Score:N/A - Features:6
- Green(Full Support):0
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):6
|
Score:N/A - Features:6
- Green(Full Support):0
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):6
|
Score:N/A - Features:6
- Green(Full Support):0
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):6
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Fast streamlined deployment
- + Strong VMware integration
- + Policy-based management
|
- + Extensive data protection capabilities
- + Policy-based management
- + Fast streamlined deployment
|
- + Built for simplicity
- + Policy-based management
- + Cost-effectiveness
|
|
Cons
|
- - Single hypervisor and server hardware
- - No bare-metal support
- - Very limited native data protection capabilities
|
- - Single hypervisor and server hardware
- - No bare-metal support
- - No hybrid configurations
|
- - Single hypervisor support
- - No stretched clustering
- - No native file services
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: VxRail
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: feb 2016
VCE was founded late 2009 by VMware, Cisco and EMC. The company is best known for its converged solutions VBlock and VxBlock. VCE started to ship its first hyper-converged solution, VxRack, late 2015, as part of the VMware EVO:RAIL program. In February 2016 VCE launched VxRail on Quanta server hardware. After completion of the Dell/EMC merger, however, VxRail became part of the Dell EMC portfolio and the company switched to Dell server hardware.
VMware was founded in 1998 and began to ship its first Software Defined Storage solution, Virtual SAN (vSAN), in 2014. The vSAN solution is fully integrated into the vSphere hypervisor platform. In 2015 VMware released major updates in the second iteration of the product and has continued to improve the software ever since.
In August 2017 VxRail had an install base of approximately 3,000 customers worldwide.
At the end of May 2019 the company had a customer install base of more than 20,000 vSAN customers worldwide. This covers both vSAN and VxRail customers. At the end of May 2019 there were over 30,000 employees working for VMware worldwide.
|
Name: HPE SimpliVity 2600
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2018
SimpliVity was founded late 2009 and began to ship its first Hyper Converged Infrastructure (HCI) solution, OmniStack, in 2013. The core of the SimpliVity solution is the OmniStack OS combined with the OmniStack Accelerator PCIe card. In January 2017 SimpliVity was acquired by HPE. In the second quarter of 2017 HPE introduced SimpliVity on HPE ProLiant server hardware and the platform was rebranded to HPE SimpliVity 380. In July 2018 HPE extended the HPE SimpliVity product family with HPE SimpliVity 2600 featuring software-only deduplication and compression.
In January 2018 HPE SimpliVity had a customer install base of approximately 2,000 customers worldwide. The number of employees working in the HPE SimpliVity division is unknown at this time.
|
Name: Hyperconvergence (HC3)
Type: Hardware+Software (HCI)
Development Start: 2011
First Product Release: 2012
Scale Computing was founded in 2007 and began to ship its first SAN/NAS scale-out storage product in 2009. In mid-2011 development started on the Hyperconvergence (HC3) platform, which combines the three foundation layers (compute, storage and virtualization) into a single hardware appliance. HC3 was built to provide ultra-simple ease of use and was initially targeted at the SMB market. The first HC3 models were released in August 2012.
In January 2019 the company had an install base of more than 3,500 customers worldwide. In January 2019 there were 130+ employees working for Scale Computing.
|
|
|
GA Release Dates:
VxRail 7.0.100 (vSAN 7.0 U1): nov 2020
VxRail 7.0 (vSAN 7.0): apr 2020
VxRail 4.7.410 (vSAN 6.7.3): dec 2019
VxRail 4.7.300 (vSAN 6.7.3): sep 2019
VxRail 4.7.212 (vSAN 6.7.2): jul 2019
VxRail 4.7.200 (vSAN 6.7.2): may 2019
VxRail 4.7.100 (vSAN 6.7.1): mar 2019
VxRail 4.7.001 (vSAN 6.7.1): dec 2018
VxRail 4.7.000 (vSAN 6.7.1): nov 2018
VxRail 4.5.225 (vSAN 6.6.1): oct 2018
VxRail 4.5.218 (vSAN 6.6.1): aug 2018
VxRail 4.5.210 (vSAN 6.6.1): may 2018
VxRail 4.5 (vSAN 6.6): sep 2017
VxRail 4.0 (vSAN 6.2): dec 2016
VxRail 3.5 (vSAN 6.2): jun 2016
VxRail 3.0 (vSAN 6.1): feb 2016
NEW
7th Generation VMware software on 14th Generation Dell server hardware.
VxRail is fueled by vSAN software. vSAN's maturity has been increasing ever since the first iteration, expanding its range of features with a set of advanced functionality.
|
GA Release Dates:
OmniStack 4.0.1 U1: dec 2020
OmniStack 4.0.1: apr 2020
OmniStack 4.0.0: jan 2020
OmniStack 3.7.10: sep 2019
OmniStack 3.7.9: jun 2019
OmniStack 3.7.8: mar 2019
OmniStack 3.7.7: dec 2018
OmniStack 3.7.6 U1: oct 2018
OmniStack 3.7.6: sep 2018
OmniStack 3.7.5: jul 2018
OmniStack 3.7.4: may 2018
OmniStack 3.7.3: mar 2018
OmniStack 3.7.2: dec 2017
OmniStack 3.7.1: oct 2017
OmniStack 3.7.0: jun 2017
OmniStack 3.6.2: mar 2017
OmniStack 3.6.1: jan 2017
OmniStack 3.5.3: nov 2016
OmniStack 3.5.2: jul 2016
OmniStack 3.5.1: may 2016
OmniStack 3.0.7: aug 2015
OmniStack 2.1.0: jan 2014
OmniStack 1.1.0: aug 2013
NEW
4th Generation software on 9th and 10th Generation HPE server hardware. The HPE SimpliVity 380 platform has shaped up to be a full-featured platform in virtualized datacenter environments.
RapidDR 3.5.1: dec 2020
RapidDR 3.5.0: oct 2020
RapidDR 3.1.1: feb 2020
RapidDR 3.1: jan 2020
RapidDR 3.0.1: sep 2019
RapidDR 3.0: jun 2019
RapidDR 2.5.1: dec 2018
RapidDR 2.5: oct 2018
RapidDR 2.1.1: jun 2018
RapidDR 2.1: mar 2018
RapidDR 2.0: oct 2017
RapidDR 1.5: feb 2017
RapidDR 1.2: oct 2016
|
GA Release Dates:
HCOS 8.6.5: mar 2020
HCOS 8.5.3: oct 2019
HCOS 8.3.3: jul 2019
HCOS 8.1.3: mar 2019
HCOS 7.4.22: may 2018
HCOS 7.2.24: sep 2017
HCOS 7.1.11: dec 2016
HCOS 6.4.2: apr 2016
HCOS 6.0: feb 2015
HCOS 5.0: oct 2014
ICOS 4.0: aug 2012
ICOS 3.0: may 2012
ICOS 2.0: feb 2010
ICOS 1.0: feb 2009
NEW
8th Generation Scale Computing software on proven Lenovo and SuperMicro server hardware.
Scale Computing HC3's maturity has been steadily increasing ever since the first iteration, expanding its feature set with both foundational and advanced capabilities. Due to its primary focus on small and midsized organizations, the feature set does not (yet) incorporate some of the larger enterprise capabilities.
HCOS = HyperCore Operating System
ICOS = Intelligent Clustered Operating System
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
Per Node
|
Per Node
|
Per Node
Each Scale Computing HC3 appliance purchased consists of hardware (server+storage), software (all-inclusive) and 1 year of premium support. Optionally, end-users can also request TOR switches as part of the solution and deployment.
In June 2018 Scale Computing introduced a Managed Service Provider (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
TOR = Top-of-Rack
|
|
Software Pricing Model
Details
|
Per Node
NEW
Every VxRail 7.0.100 node comes bundled with:
- Dell EMC VxRail Manager 7.0.100
- VxRail Manager Plugin for VMware vCenter
- VMware vCenter Server Virtual Appliance (vCSA) 7.0 U1
- VMware vSphere 7.0 U1
- VMware vSAN 7.0 U1
- VMware vRealize Log Insight 8.1.1.0
- ESRS 3.46
As of VxRail 3.5 VMware vSphere licenses have to be purchased separately. VxRail nodes come pre-installed with VMware vSphere 6.7 U3 Patch01 and require a valid vSphere license key to be entered. VMware vSphere Data Protection (VDP) 6.1 is included as part of the vSphere license and is downloadable through the VxRail Manager.
VMware vSAN licenses have to be purchased separately as well. As of VxRail 4.7 there is a choice of either vSAN 6.7 U3 Standard, Advanced or Enterprise licenses.
Dell EMC VxRail 7.0.100 supports VMware Cloud Foundation (VCF) 4.1. VMware Cloud Foundation (VCF) is a unified SDDC platform that brings together VMware ESXi, VMware vSAN, VMware NSX, and optionally, vRealize Suite components, VMware NSX-T, VMware Enterprise PKS, and VMware Horizon 7 into one integrated stack.
Dell EMC VxRail 7.0 does not support VMware vLCM; vLCM is disabled in vCenter.
Dell EMC VxRail 7.0 does not support appliances based on the Quanta hardware platform.
Dell EMC VxRail 7.0 does not support RecoverPoint for Virtual Machines (RP4VM).
|
Per Node (all-inclusive)
Add-ons: RapidDR (per VM)
There is no separate software licensing for most platform integrated features. By default all software functionality is available regardless of the hardware model purchased.
The HPE SimpliVity 2600 per node software license is tied to the selected CPU and storage configuration.
License Add-ons:
- RapidDR feature
RapidDR uses a 'per VM' licensing model and is available in 25 VM and 100 VM license packs.
|
Per Node (all-inclusive)
There is no separate software licensing. Each node comes equipped with an all-inclusive feature set. This means that without exception all Scale Computing HC3 software capabilities are available for use.
In June 2018 Scale Computing introduced a Managed Service Provider (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
HC3 Cloud Unity DRaaS requires a monthly subscription that is in part based on Google Cloud Platform (GCP) resource usage (compute, storage, network). The HC3 Cloud Unity DRaaS subscription includes:
- 6 days of Active Mode testing
- Runbook outlining DR procedures
- 1 Runbook failover test and 1 separate Declaration
- Network egress equal to 12.5% of Storage
- ScaleCare Support
In addition end-users and first-time service providers can purchase a DR Planning Service (one-time fee) for onboarding.
|
|
Support Pricing Model
Details
|
Per Node
Dell EMC offers two types of VxRail Appliance Support:
- Enhanced provides 24x7 support for production environments, including around-the-clock technical support, next-business-day onsite response, proactive remote monitoring and resolution, and installation of non-customer-replaceable units.
- Premium provides mission critical support for fastest resolution, including 24x7 technical support and monitoring, priority onsite response for critical issues, installation of operating environment updates, and installation of all replacement parts.
|
Per Node
3-year HPE SimpliVity 2600 solution support (24x7x365) is mandatory.
|
Per Node
Each appliance comes with 1 year ScaleCare Premium Support that consists of:
- 24x7x365 by telephone (US and Europe)
- 2 hour response time for critical issues
- Live chat, email support, and general phone support Mon-Fri 8AM-8PM EDST
- Next Business Day (NBD) delivery of hardware replacement parts
ScaleCare Premium Support also provides remote installation services on the initial deployment of Scale Computing HC3 clusters.
In June 2018 Scale Computing introduced a Managed Service Provider (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
|
|
|
Design & Deploy
|
Score:66.7% - Features:7
- Green(Full Support):2
- Amber(Partial):4
- Red(Not support):0
- Gray(FYI only):1
|
Score:66.7% - Features:7
- Green(Full Support):2
- Amber(Partial):4
- Red(Not support):0
- Gray(FYI only):1
|
Score:58.3% - Features:7
- Green(Full Support):2
- Amber(Partial):3
- Red(Not support):1
- Gray(FYI only):1
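The section scores above appear to follow a simple weighting: green items count fully, amber items count half, red items count zero, and gray (FYI-only) items are excluded from the denominator. The small Python sketch below reproduces that apparent formula; the function name is ours, not part of WhatMatrix.

def section_score(green, amber, red, gray):
    # Gray (FYI-only) items are informational and not scored.
    scored = green + amber + red
    if scored == 0:
        return None  # rendered as 'N/A', as in the General section above
    return round(100 * (green + 0.5 * amber) / scored, 1)

print(section_score(2, 4, 0, 1))   # 66.7 -> Design & Deploy, VxRail / SimpliVity 2600
print(section_score(2, 3, 1, 1))   # 58.3 -> Design & Deploy, Scale Computing HC3
print(section_score(9, 3, 0, 2))   # 87.5 -> Storage Support, VxRail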
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Hypervisor
Compute
Storage
Network (limited)
Data Protection (limited)
Management
Automation&Orchestration
VMware is stack-oriented, whereas the VxRail platform itself is heavily storage-focused.
With the vSAN/VxRail platforms VMware aims to provide all functionality required in a Private Cloud ecosystem.
|
Compute
Storage
Data Protection (full)
Management
Automation&Orchestration (DR)
HPE is stack-oriented, whereas the SimpliVity 2600 platform itself is heavily storage- and protection-focused.
HPE SimpliVity 2600 aims to provide key components within a Private Cloud ecosystem as well as integration with existing hypervisors and applications.
|
Hypervisor
Compute
Storage
Networking (optional)
Data Protection
Management
Automation&Orchestration
Scale Computing is stack-oriented.
With the HC3 platform Scale Computing aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
1, 10, 25 GbE
VxRail hardware models include redundant ethernet connectivity using SFP+ or Base-T. Dell EMC recommends at least 10GbE to avoid the network becoming a performance bottleneck.
VxRail 4.7 added automatic network configuration support for select Dell top-of-rack (TOR) switches.
VxRail 4.7.211 added support for Qlogic and Mellanox NICs, as well as SmartFabric support for Dell EMC S5200 25Gb TOR switches.
|
1, 10 GbE
HPE SimpliVity 2600 hardware models include ethernet connectivity using SFP+. HPE SimpliVity 2600 recommends 10GbE to avoid the network becoming a performance bottleneck.
|
1, 10 GbE
Scale Computing hardware models include redundant ethernet connectivity in an active/passive setup.
|
|
Overall Design Complexity
Details
|
Medium
Dell EMC VxRail was developed with simplicity in mind, both from a design and a deployment perspective. VMware vSAN's uniform platform architecture, running at the core of VxRail, is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities either natively or by leveraging features already present in the VMware hypervisor, vSphere, on a per-VM basis. As there is no tight integration involved, especially with regard to data protection, choices need to be made whether to incorporate 1st-party or 3rd-party solutions into the overall technical design.
|
Low
HPE SimpliVity was developed with simplicity in mind, both from a design and a deployment perspective. HPE SimpliVity's uniform platform architecture is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities natively and on a per-VM basis. There are only a handful of storage building blocks to choose from, and many advanced capabilities like deduplication and compression are always turned on. This minimizes the amount of design choices as well as the number of deployment steps.
|
Low
Scale Computing HC3 was developed with simplicity in mind, both from a design and a deployment perspective. The HC3 platform architecture is meant to be applicable to general virtual server infrastructure (VSI) use-cases and seeks to provide important capabilities natively. There are only a few storage building blocks to choose from, and many advanced capabilities like deduplication are always turned on. This minimizes the amount of design choices as well as the number of deployment steps.
|
|
External Performance Validation
Details
|
StorageReview (dec 2018)
Principled Technologies (jul 2017, jun 2017)
StorageReview (Dec 2018)
Title: 'Dell EMC VxRail P570F Review'
Workloads: MySQL OLTP, MSSQL OLTP, Generic profiles
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL), Vdbench (generic)
Hardware: All-Flash Dell EMC VxRail P570F, vSAN 6.7
Principled Technologies (Jul 2017)
Title: 'Handle more orders with faster response times, today and tomorrow'
Workloads: MSSQL OLTP
Benchmark Tools: DS2 (MSSQL)
Hardware: All-Flash Dell EMC VxRail P470F, 4-node cluster, VxRail 4.0 (vSAN 6.2)
Principled Technologies (Jun 2017)
Title: 'Empower your databases with strong, efficient, scalable performance'
Workloads: MSSQL OLTP
Benchmark Tools: DS2 (MSSQL)
Hardware: All-Flash Dell EMC VxRail P470F, 4-node cluster, VxRail 4.0 (vSAN 6.2)
|
Login VSI (jun 2018)
Login VSI (Jun 2018)
Title: 'VMware Horizon 7.4 on HPE SimpliVity 2600'
Workloads: VMware Horizon VDI
Benchmark Tools: Login VSI (VDI)
Hardware: All-Flash HPE SimpliVity 170 Gen10, 4-node cluster / 6-node cluster, OmniStack 3.7.5
|
N/A
No Scale Computing HC3 validated test reports have been published in 2016/2017/2018/2019.
|
|
Evaluation Methods
Details
|
Proof-of-Concept (POC)
vSAN: Free Trial (60-days)
vSAN: Online Lab
VxRail 7.0 runs VMware vSAN 7.0 software at its core. vSAN Evaluation is freely downloadable after registering online. Because it is embedded in the hypervisor, the free trial includes vSphere and vCenter Server. vSAN Evaluation can be installed on all commodity hardware platforms that meet the hardware requirements. vSAN Evaluation use is time-restricted (60 days). vSAN Evaluation is not for production environments.
VMware also offers a vSAN hosted hands-on lab that lets you deploy, configure and manage vSAN in a contained environment, after registering online.
|
Cloud Technology Showcase (CTS)
Proof-of-Concept (POC)
HPE offers no Community Edition or Free Trial edition of their hyperconverged software.
However, HPE maintains a cloud-based evaluation environment in which demos can be conducted and where potential customers can load up their own workloads to execute Proof-of-Concepts. This is called the Cloud Technology Showcase (CTS).
|
Public Facing Clusters
Proof-of-Concept (PoC)
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Single-Layer: VMware vSAN is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
VMware vSAN can partially serve in a dual-layer model by providing storage also to other vSphere hosts within the same cluster that do not contribute storage to vSAN themselves or to bare metal hosts. However, this is not a primary use case and also requires the other vSphere hosts to have vSAN enabled (Please view the compute-only scale-out option for more information).
|
Single-Layer
Single-Layer: HPE SimpliVity 2600 is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Although HPE SimpliVity 2600 can serve in a dual-layer model by providing storage to non-HPE SimpliVity 2600 hypervisor hosts, this would negate many of the platforms benefits as well as the financial business case. (Please view the compute-only scale-out option for more information).
|
Single-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
|
|
Deployment Method
Details
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Dell EMC, customer deployments can be executed in hours instead of days.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by HPE SimpliVity 2600, customer deployments can be executed in hours instead of days.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Scale Computing, customer deployments can be executed in hours instead of days.
|
|
|
Workload Support
|
Score:76.9% - Features:14
- Green(Full Support):9
- Amber(Partial):2
- Red(Not support):2
- Gray(FYI only):1
|
Score:65.4% - Features:14
- Green(Full Support):7
- Amber(Partial):3
- Red(Not support):3
- Gray(FYI only):1
|
Score:23.1% - Features:14
- Green(Full Support):2
- Amber(Partial):2
- Red(Not support):9
- Gray(FYI only):1
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Kernel Integrated
Virtual SAN is embedded into the VMware hypervisor. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
The HPE SimpliVity 2600 OmniStack Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the HPE SimpliVity 2600 storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor is not impeding the I/O flow.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
KVM User Space
SCRIBE runs in KVM user space. Scale Computing made a conscious decision not to make SCRIBE kernel-integrated in order to avoid the risk that storage problems would cause a system panic, meaning an entire node could go down as a result.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 7.0 U1
NEW
VMware Virtual SAN is an integral part of the VMware vSphere platform; as such, it cannot be used with any other hypervisor platform.
Dell EMC VxRail and vSAN support a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
|
VMware vSphere ESXi 6.5U2-6.7U3
HPE SimpliVity OmniStack 4.0.0 introduced support for VMware vSphere 6.7 Update 3.
HPE SimpliVity 2600 currently does not support Microsoft Hyper-V, whereas HPE SimpliVity 380 does.
|
Linux KVM-based
NEW
Scale Computing HC3 uses its own proprietary HyperCore operating system and KVM-based hypervisor.
SCRIBE is an integral part of the Linux KVM platform, enabling it to own the full software stack. As VMware and Microsoft don't allow such tight integration, SCRIBE cannot be used with any other hypervisor platform.
Scale Computing HC3 supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
The Scale Computing HC3 hypervisor fully supports the following Guest operating systems:
Windows Server 2019
Windows Server 2016
Windows Server 2012 R2
Windows 10
Windows 8.1
CentOS Enterprise Linux
RHEL Enterprise Linux
Ubuntu Server
FreeBSD
SUSE Linux Enterprise
Fedora
Versions supported are versions currently supported by the operating system manufacturer.
SCRIBE = Scale Computing Reliable Independent Block Engine
|
|
Hypervisor Interconnect
Details
|
vSAN (incl. WSFC)
VMware uses a proprietary protocol for vSAN.
vSAN 6.1 and upwards support the use of Microsoft Failover Clustering (MSFC). This includes MS Exchange DAG and SQL Always-On clusters when a file share witness quorum is used. The use of a failover clustering instance (FCI) is not supported.
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 introduced native support for SCSI-3 Persistent Reservations (PR), which enables Windows Server Failover Clusters (WSFC) to be directly deployed on native vSAN VMDKs. This capability enables migrations from legacy deployments on physical RDMs or external storage protocols to VMware vSAN.
|
NFS
NFS is used as the storage protocol in vSphere environments.
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
Libscribe
In order to read/write from/to Scale Computing HC3 block devices (aka Virtual SCRIBE Devices or VSD for short) the Libscribe component needs to be installed in KVM on each physical host. Libscribe is part of the QEMU process and presents virtual block devices to the VM. Because Libscribe is a QEMU block driver, SCRIBE is a supported device type and qemu-img commands work by default.
Although a virtIO driver doesn't strictly need to be installed in each VM, it is highly recommended, as I/O performance benefits greatly from it. I/O submission takes place via Linux native asynchronous I/O (AIO), which is present in KVM.
Shared storage devices in virtual Windows Clusters are supported.
QEMU = Quick Emulator
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
N/A
Dell EMC VxRail does not support any non-hypervisor platforms.
|
N/A
HPE SimpliVity 2600 does not support any non-hypervisor platforms.
|
N/A
Scale Computing HC3 does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
N/A
Dell EMC VxRail does not support any non-hypervisor platforms.
|
N/A
HPE SimpliVity 2600 does not support any non-hypervisor platforms.
|
N/A
Scale Computing HC3 does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (Hypervisor-based, vSAN supported)
VMware vSphere Docker Volume Service (vDVS) technology enables running stateful containers backed by storage technology of choice in a vSphere environment.
vDVS comprises a Docker plugin and a vSphere Installation Bundle (VIB), which together bridge the Docker and vSphere ecosystems.
vDVS abstracts underlying enterprise-class storage and makes it available as Docker volumes to a cluster of hosts running in a vSphere environment. vDVS can be used with enterprise-class storage technologies such as vSAN, VMFS, NFS and VVol.
vSAN 6.7 U3 introduces support for VMware Cloud Native Storage (CNS). When Cloud Native Storage is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
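To illustrate how vDVS exposes vSphere-backed storage to containers, the hedged sketch below uses the Docker SDK for Python to create a volume through the 'vsphere' driver and attach it to a container. The volume name, size and image are illustrative only, and the sketch assumes the vDVS VIB is installed on the ESXi host and the Docker plugin is installed in the container host VM.

# Minimal sketch, assuming the vDVS plugin is already installed and registered as 'vsphere'.
import docker

client = docker.from_env()

# Create a persistent volume on the underlying vSAN/VMFS/NFS datastore.
volume = client.volumes.create(
    name="db-data",                    # illustrative volume name
    driver="vsphere",                  # driver name registered by the vDVS plugin
    driver_opts={"size": "10gb"},      # size option accepted by vDVS
)

# Run a container that stores its state on the vSphere-backed volume.
client.containers.run(
    "postgres:13",
    detach=True,
    volumes={"db-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)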
|
N/A
HPE SimpliVity 2600 relies on the container support delivered by the hypervisor platform.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Platform Compatibility
Details
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Platform Interconnect
Details
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Host OS Compatibility
Details
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Orch. Compatibility
Details
|
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS).
When Cloud Native Storage (CNS) is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS, vVols) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
VCP = vSphere Cloud Provider
CSI = Container Storage Interface
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Orch. Interconnect
Details
|
Kubernetes Volume Plugin
The VMware vSphere Container Storage Interface (CSI) Volume Driver for Kubernetes leverages vSAN block storage and vSAN file shares to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree VMware vSphere CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using vSAN block storage and vSAN native files shares dynamically provisioned by VMware vSAN File Services.
The VMware vSphere CSI Volume Driver requires Kubernetes v1.14 or later and VMware vSAN 6.7 U3 or later. vSAN File Services requires VMware vSAN/vSphere 7.0.
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, vVol, VMFS or NFS datastores.
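To make the storage primitives listed above concrete, the hedged sketch below uses the official Kubernetes Python client to request a PersistentVolumeClaim against a vSAN-backed StorageClass. The StorageClass name 'vsan-default' and the claim name are assumptions for illustration, not names defined by VMware.

# Minimal sketch, assuming a StorageClass backed by the vSphere CSI driver (or VCP) already exists.
from kubernetes import client, config

config.load_kube_config()              # authenticate using the local kubeconfig
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vsan-default",   # illustrative StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)

# The provisioner dynamically creates a vSAN-backed persistent volume for the claim.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)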
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
Dell EMC has published Reference Architecture whitepapers for both VMware Horizon and Citrix XenDesktop platforms.
Dell EMC VxRail 4.7.211 supports VMware Horizon 7.7
|
VMware Horizon
Citrix XenDesktop
HPE SimpliVity OmniStack 3.7.8 introduces support for VMware Horizon Instant Clone provisioning technology for vSphere 6.7.
HPE has published a Reference Architecture whitepaper for VMware Horizon 7.4 on HPE SimpliVity 2600. The platform has been validated by Login VSI.
|
Citrix XenDesktop
Parallels RAS
Leostream
Scale Computing HC3 HyperCore is a Citrix Ready platform. XenDesktop 7.6 LTSR, 7.8 and 7.9 are officially supported.
Scale Computing HC3 also actively supports the following desktop virtualization software:
- Parallels Remote Application Server (RAS);
- Leostream (=connection management).
A joint Reference Configuration white paper for Parallels RAS on Scale Computing HC3 was published in June 2019.
A joint Quick Start with Scale Computing HC3 and Leostream white paper was released in March 2019.
Since Scale Computing HC3 does not support the VMware vSphere hypervisor, VMware Horizon is not an option.
|
|
|
VMware: up to 160 virtual desktops/node
Citrix: up to 140 virtual desktops/node
VMware Horizon 7.7: Load bearing number is based on Login VSI tests performed on hybrid VxRail V570F appliances using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.15: Load bearing number is based on Login VSI tests performed on hybrid VxRail V570F-B appliances using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding reference architecture whitepapers.
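As a rough illustration of how the published load-bearing numbers translate into cluster sizing, the sketch below estimates node count for a desired desktop count while keeping one node spare for failover; it is an assumption-based example, not vendor sizing guidance.

import math

def nodes_needed(total_desktops, desktops_per_node, spare_nodes=1):
    # Capacity nodes plus spare node(s) reserved for maintenance/failover.
    return math.ceil(total_desktops / desktops_per_node) + spare_nodes

# e.g. 500 Horizon desktops at the published ~160 desktops/node figure:
print(nodes_needed(500, 160))   # 4 capacity nodes + 1 spare = 5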
|
VMware: up to 175 virtual desktops/node
Citrix: unknown
VMware Horizon 7.4: Load bearing number is based on Login VSI tests performed on HPE SimpliVity 170 Gen10 all-flash model using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
|
Workspot: 40 virtual desktops/node
Workspot VDI 2.0: Load bearing number is based on Login VSI tests performed on hybrid HC2150 appliances using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepaper. Please note that this technical whitepaper is dated August 2016 and that Workspot VDI 2.0 no longer exists. Workspot's current portfolio only includes cloud solutions that run in Microsoft Azure.
Scale Computing has not published any Reference Architecture whitepapers for the Citrix XenDesktop platform.
|
|
|
Server Support
|
Score:76.9% - Features:13
- Green(Full Support):8
- Amber(Partial):4
- Red(Not support):1
- Gray(FYI only):0
|
Score:57.7% - Features:13
- Green(Full Support):3
- Amber(Partial):9
- Red(Not support):1
- Gray(FYI only):0
|
Score:65.4% - Features:13
- Green(Full Support):5
- Amber(Partial):7
- Red(Not support):1
- Gray(FYI only):0
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Dell
Dell EMC uses a single brand of server hardware for its VxRail solution. Since completion of the Dell/EMC merger, Dell EMC has shifted from Quanta to Dell PowerEdge server hardware. This coincides with the VxRail 4.0 release (dec 2016).
In November 2017 Dell refreshed its VxRail hardware base with 14th Generation Dell PowerEdge server hardware.
|
HPE
HPE SimpliVity 2600 deployments are solely based on HPE Apollo 2600 Gen10 server hardware.
|
Lenovo (native and OEM)
SuperMicro (native)
Scale Computing leverages both Lenovo and SuperMicro server hardware as building blocks for its native HC3 appliances:
HC1200 is Supermicro server hardware
HC1250 is Supermicro server hardware
HC1250D is Lenovo server hardware
HC1250DF is Lenovo server hardware
HC5250D is Lenovo server hardware
Scale Computing has maintained a partnership with MBX Systems since 2012. MBX Systems is a hardware integrator based in the US, with headquarters both in Chicago and San Jose, that is tasked with assembling the native HC3 appliances.
In May 2018 Scale Computing and Lenovo entered into an OEM partnership to provide Scale Computing HC3 software on Lenovo ThinkSystem tower (ST250) or rack servers (SR630, SR650, SR250) with a wide variety of hardware choices (e.g. CPU and RAM).
|
|
|
5 Dell (native) Models (E-, G-, P-, S- and V-Series)
Different models are available for different workloads and use cases:
E Series (1U-1Node) - Entry Level
G Series (2U-4Node) - High Density
V Series (2U-1Node) - VDI Optimized
P Series (2U-1Node) - Performance Optimized
S Series (2U-1Node) - Storage Dense
E-, G-, V- and P-Series can be acquired as Hybrid or All-Flash appliance. S-Series can only be acquired as Hybrid appliance.
|
2 models
HPE SimpliVity 2600 is available in 2 series:
170 Gen10-series
190 Gen10-series
HPE positions HPE SimpliVity 2600 as a VDI-optimized hyperconverged solution. The platform is also beneficial for compute-intensive workload use cases in high-density environments.
The Gen10 6000-series is best for high-performance, IO-intensive mixed workloads, whereas the Gen10 4000-series is best for typical workloads (heavy reads/lower ratio of writes) at lower cost than the 6000-series. The difference between the 4000 and 6000 series is the SSD type that is inserted in the server hardware.
There are no HPE SimpliVity 2600 Hybrid (SSD+HDD) models to choose from.
|
4 Native Models
NEW
There are 4 native model series to choose from:
HE100: Edge Computing / Remote offices, stores, warehouses, labs, classrooms, ships
HE500: Edge Computing / Small remote sites / DR
HC1200: SMB / Midmarket
HC5000: Enterprise / Distributed Enterprise
There are 4 Lenovo model series to choose from:
ST250: Edge, Backup
SR250: Edge
SR630: Mid-market
SR650: Mid-market, High Capacity
|
|
|
1 or 4 nodes per chassis
The VxRail Appliance architecture uses a combination of 1U-1Node, 2U-1Node and 2U-4Node building blocks.
|
170-series: 3-4 nodes per chassis
190-series: 2 nodes per chassis
The HPE Apollo 2600 server is a 2U/4-node building block. The nodes in each system have an identical hardware configuration.
Up to 4 slots in each chassis may be used for placement of HPE SimpliVity 2600 170-series nodes.
Up to 2 slots in each chassis may be used for placement of HPE SimpliVity 2600 190-series nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1 node per chassis
NEW
Scale Computing HE100 appliances are Intel NUCs.
Scale Computing HE500 appliances are either 1U building blocks or Towers.
Scale Computing HC1200 appliances are 1U building blocks.
Scale Computing HC5000 appliances are 2U building blocks.
Lenovo HC3 Edge ST250 appliances are Towers.
Lenovo HC3 Edge SR250 appliances are 1U building blocks.
Lenovo HC3 Edge SR630 appliances are 1U building blocks.
Lenovo HC3 Edge SR650 appliances are 2U building blocks.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
NUC = Next Unit of Computing
|
|
|
Yes
Dell EMC allows mixing of different VxRail Appliance models within a cluster, except where doing so would create highly unbalanced performance. The first 4 cluster nodes do not have to be identical anymore (previous requirement). All G Series nodes within the same chassis must be identical. No mixing is allowed between hybrid and all-flash nodes within the same storage cluster. All nodes within the same cluster must run the same version of VxRail software.
|
Partial
For mixing HPE SimpliVity 2600 nodes in a cluster, HPE recommends following these general guidelines:
- Only models of equal socket count are supported.
- All hosts should contain equal amounts of CPU & Memory.
- As a best practice, it’s recommended to use the same CPU model within a single cluster.
Heterogeneous Federation Support: Although HPE SimpliVity 380 nodes cannot be mixed with HPE SimpliVity 2600 nodes or legacy SimpliVity nodes within the same cluster, they can coexist with such clusters within the same Federation.
HPE OmniStack 3.7.9 introduces support for using different versions of the OmniStack software within a federation. Some clusters can have hosts using HPE OmniStack 3.7.9 and other clusters can have hosts all using HPE OmniStack 3.7.8 and above. The hosts in each datacenter and cluster must use the same version of the software.
|
Yes
Scale Computing allows for mixing different server hardware in a single HC3 cluster, including nodes from different generations.
|
|
|
|
Components |
|
|
|
Flexible
VxRail offers multiple CPU options in each hardware model.
E-Series can have single or dual socket, and up to 28 cores/CPU.
G-Series can have single or dual socket, and up to 28 cores/CPU.
P-Series can have dual or quad socket, and up to 28 cores/CPU.
S-Series can have single or dual socket, and up to 28 cores/CPU.
V-Series can have dual socket, and up to 28 cores/CPU.
VxRail appliances on Dell PowerEdge 14G servers are equipped with Intel Xeon Scalable processors (Skylake and Cascade Lake).
Dell EMC VxRail 4.7.211 introduced official support for the 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
Flexible
HPE SimpliVity 2600 on Apollo Gen10 server hardware:
- Choice of Intel Xeon Scalable (Skylake) Silver and Gold processors (2x per node), 12 to 22 cores selectable.
- Single socket option or dual socket option with less cores (8 or 10) for ROBO deployments only.
Although HPE does support 2nd generation Intel Xeon Scalable (Cascade Lake) processors in its ProLiant server line-up as of April 2019, HPE SimpliVity nodes do not yet ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
Flexible: up to 3 options (native); extensive (Lenovo OEM)
Scale Computing HE100-series CPU options:
HE150: 1x Intel i3-10110U (2 cores); 1x Intel i5-10210U (4 cores); 1x i7-10710U (6 cores)
Scale Computing HE500-series CPU options:
HE500: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550F: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE500T: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550TF: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
Scale Computing HC1200-series CPU options:
HC1200: 1x Intel Xeon Bronze 3204 (6 cores); 1x Intel Xeon Silver 4208 (8 cores)
HC1250: 1x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6242 (16 cores)
HC1250D: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores); 2x Intel Xeon Gold 6244 (8 cores)
HC1250DF: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores); 2x Intel Xeon Gold 6244 (8 cores)
Scale Computing HC5000-series CPU options:
HC5200: 1x Intel Xeon Silver 4208 (8 cores); 1x Intel Xeon Silver 4210 (10 cores); 1x Intel Xeon Gold 6230 (20 cores)
HC5250D: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores)
Scale Computing HC1200 and HC5000 series nodes ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
Lenovo HC3 Edge CPU options:
ST250: 1x Intel Xeon E-2100
SR250: 1x Intel Xeon E-2100
SR630: 2x 1st or 2nd generation Intel Xeon Scalable (Skylake or Cascade Lake)
SR650: 2x 1st or 2nd generation Intel Xeon Scalable (Skylake or Cascade Lake)
|
|
|
Flexible
The amount of memory is configurable for all hardware models. Depending on the hardware model Dell EMC offers multiple choices on memory per Node, maxing at 3TB for E-, P-, S-, V-series and 2TB for G-Series. VxRail Appliances use 16GB RDIMMS, 32GB RDIMMS, 64GB LRDIMMS or 128GB LRDIMMS.
|
Flexible
HPE SimpliVity 2600 on Apollo Gen10 server hardware:
- 384GB to 768GB per node selectable.
- 128GB per node selectable for ROBO deployments only.
|
Flexible: up to 8 options
Scale Computing HE100-series memory options:
HE150: 8GB, 16GB, 32GB, 64GB
Scale Computing HE500-series memory options:
HE500: 16GB, 32GB, 64GB
HE550: 16GB, 32GB, 64GB
HE550F: 16GB, 32GB, 64GB
HE500T: 16GB, 32GB, 64GB
HE500TF: 16GB, 32GB, 64GB
Scale Computing HC1200-series memory options:
HC1200: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB
HC1250: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB
HC1250D: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
HC1250DF: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
Scale Computing HC5000-series memory options:
HC5200: 64GB, 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
HC5250D: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB, 1TB, 1.5TB
Lenovo HC3 Edge series memory options:
ST250: 16GB - 64GB
SR250: 16GB - 64GB
SR630: 64GB - 768GB
SR650: 64GB - 1.5TB
|
|
|
Flexible: number of disks (limited) + capacity
A 14th generation VxRail appliance has 24 disk slots.
E-Series supports 10x 2.5' SAS drives per node (up to 2 disk groups: 1x flash + 4x capacity drives each).
G-Series supports 6x 2.5' SAS drives per node (up to 1 disk group: 1x flash + 5x capacity drives each).
P-Series supports 24x 2.5' SAS drives per node (up to 4 disk groups: 1x flash + 5x capacity drives each).
V-Series supports 24x 2.5' SAS drives per node (up to 4 disk groups: 1x flash + 5x capacity drives each).
S-Series supports 12x 2.5' + 2x 3.5' SAS drives per node (up to 2 disk groups: 1x flash + 6x capacity drives each).
|
Fixed: number of disks + capacity
For HPE SimpliVity 2600 on Apollo Gen10 server hardware, the following kits are selectable per node:
XS (6x 1.92TB SSD in RAID1+0)
Always-on inline deduplication and compression allow HPE SimpliVity 2600 to provide a much higher amount of effective storage capacity on a single node than the raw disk capacity would indicate.
|
Capacity: up to 5 options (HDD, SSD)
Fixed: Number of disks
Scale Computing HE100-series storage options:
HE150: 1x 250GB/500GB/1TB/2TB M.2 NVMe
Scale Computing HE500-series storage options:
HE500: 4x 1/2/4/8TB NL-SAS [magnetic-only]
HE550: 1x 480GB/960GB SSD + 3x 1/2/4TB NL-SAS [hybrid]
HE550F: 4x 240GB/480GB/960GB SSD [all-flash]
HE500T: 4x 1/2/4/8TB NL-SAS + 8x 4/8TB NL-SAS [magnetic-only]
HE550TF: 4x 240GB/480GB/960GB SSD [all-flash]
Scale Computing HC1200-series storage options:
HC1200: 4x 1/2/4/8/12TB NL-SAS [magnetic-only]
HC1250: 1x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8/12TB NL-SAS [hybrid]
HC1250D: 1x 960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
HC1250DF: 4x 960GB/1.92TB/3.84TB/7.68TB SSD [all-flash]
Scale Computing HC5000-series storage options:
HC5200: 12x 8/12TB NL-SAS [magnetic-only]
HC5250D: 3x 960GB/1.92TB/3.84TB/7.68TB SSD + 9x 4/8TB NL-SAS [hybrid]
Lenovo HC3 Edge series storage options:
ST250: 8x 1/2/4/8TB NL-SAS [magnetic only]
ST250: 4x 960GB/1.92TB/3.84TB SSD [all-flash]
SR250: 4x 1/2/4/8TB NL-SAS [magnetic only]
SR250: 1x 960GB/1.92TB/3.84TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
SR250: 4x 960GB/1.92TB/3.84TB SSD [all-flash]
SR630: 4x 1/2/4/8TB NL-SAS [magnetic only]
SR630: 1x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
SR630: 4x 1.92TB/3.84TB/7.68TB SSD [all-flash]
SR650: 3x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 9x 1/2/4/8TB NL-SAS [hybrid]
The SSDs in all mentioned nodes are regular SSDs (non-NVMe).
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Flexible (1, 10 or 25 Gbps)
E-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
G-Series supports 2x 25 GbE (SFP28) or 2x 10GbE (RJ45/SFP+) per node.
P-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
S-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
V-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) per node.
1GbE configurations are only supported with 1-CPU configurations.
Dell EMC VxRail 4.7.300 provides more network design flexibility in creating VxRail clusters across multiple racks. It also provides the ability to expand a cluster beyond one rack using L3 networks and L2 networks.
|
Flexible: additional 10Gbps (190-series only)
HPE SimpliVity 2600 190 Gen10-series:
One optional 10 Gbps or 10/25 Gbps PCI adapter can be added to a node.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
Fixed: HC1200/5000: 10GbE; HE150/500T: 1GbE
Flexible: HE500: 1/10GbE
Scale Computing HE100-series networking options:
HE150: 1x 1GbE
Scale Computing HE500-series networking options:
HE500: 4x 1GbE or 4x 10GbE SFP+
HE550: 4x 1GbE or 4x 10GbE SFP+
HE550F: 4x 1GbE or 4x 10GbE SFP+
HE500T: 2x 1GbE
HE550TF: 2x 1GbE
Scale Computing HC1200-series networking options:
HC1200: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250D: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250DF: 4x 10GbE Base-T/SFP+ bonded active/passive
Scale Computing HC5000-series networking options:
HC5200: 4x 10GbE Base-T/SFP+ bonded active/passive
HC5250D: 4x 10GbE Base-T/SFP+ bonded active/passive
Lenovo HC3 Edge series networking options:
ST250: 2x 1GbE
SR250: 4x 1GbE or 4x 10GbE SFP+
SR630: 4x 10GbE BaseT or 4x 10GbE SFP+
SR650: 4x 10GbE BaseT or 4x 10GbE SFP+
|
|
|
NVIDIA Tesla
Dell EMC offers multiple GPU options in the VxRail V-series Appliances.
Currently the following GPUs are provided as add-on in PowerEdge Gen14 server hardware (V-model only):
NVIDIA Tesla M10 (up to 2x in each node)
NVIDIA Tesla M60 (up to 3x in each node)
NVIDIA Tesla P40 (up to 3x in each node)
|
NVIDIA Tesla (190-series only)
HPE SimpliVity 2600 offers a GPU option in the HPE SimpliVity 190 Gen10-series for leveraging vGPU in virtual desktop/application environments.
Currently HPE SimpliVity 2600 190 Gen10-series supports the following GPUs in a single server:
2x NVIDIA Tesla M10
|
N/A
Scale Computing HC3 currently does not provide any GPU options.
|
|
|
|
Scaling |
|
|
|
Memory
Storage
Network
GPU
At the moment CPUs are not upgradeable in VxRail.
|
CPU
Memory
GPU
|
CPU
Memory
The Scale Computing HC3 platform allows for expanding CPU and Memory hardware resources. Storage resources (the number of disks within a single node) are usually not expanded.
|
|
|
Compute+storage
Storage+Compute: Existing VxRail clusters can be expanded by adding additional VxRail nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: VMware does not allow non-VxRail hosts to become part of a VxRail cluster. This means that installing and enabling the vSAN VMkernel on hosts that do not contribute storage, so that vSAN datastores can be presented to these hypervisor hosts as well, is not an option at this time.
Storage-only: N/A; A VxRail node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
Compute+storage
Compute-only
Storage+Compute: Existing HPE SimpliVity 2600 federations can be expanded by adding additional HPE SimpliVity 2600 nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because HPE SimpliVity 2600 leverages a file-level protocol (NFS), storage can be presented to hypervisor hosts not participating in the HPE SimpliVity 2600 cluster. This is also beneficial to migrations, since it allows for online storage vMotions between HPE SimpliVity 2600 and non-HPE SimpliVity 2600 storage platforms.
Storage-only: N/A; A HPE SimpliVity 2600 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
Storage+Compute
Storage-only
Storage+Compute: Existing Scale Computing HC3 clusters can be expanded by adding additional nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: N/A; A Scale Computing HC3 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
Storage-only: A Scale Computing HC3 node can be configured as a storage-only node by setting a flag; this has to be performed by Scale Computing engineering (end-user organizations cannot set the flag themselves).
|
|
|
3-64 storage nodes in 1-node increments
At minimum a VxRail deployment consists of 3 nodes. From there the solution can be scaled one or more nodes at a time. The first 4 cluster nodes must be identical.
Scaling beyond 32 nodes no longer requires a Request for Product Qualification (RPQ). However, an RPQ is required for Stretched Cluster implementations.
If using 1GbE only, a storage cluster cannot expand beyond 8 nodes.
For the maximum node configuration hypervisor cluster scale-out limits still apply: 64 hosts for VMware vSphere.
VxRail 4.7 introduces support for adding multiple nodes in parallel, speeding up cluster expansions.
|
vSphere: 2-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 2-32+ storage nodes (Federation) in 1-node increments
HPE SimpliVity 2600 currently offers support for up to 16 storage nodes and 720 VMs within a single VSI cluster, and up to 8 storage nodes within a single VDI cluster. Up to 32 storage nodes are supported within a single Federation. Multiple Federations can be used in either single-site or multi-site deployments, allowing for a scalable as well as a flexible solution. Data protection can be configured to run in between federations.
Cluster scale enhancements (16 nodes instead of 8 nodes within a single cluster) apply to new as well as existing SimpliVity clusters that run OmniStack 3.7.7.
For specific use-cases a Request for Product Qualification (RPQ) process can be initiated to authorize more than 32 storage nodes within a single Federation.
HPE SimpliVity 2600 also supports adding compute nodes to a storage node cluster in vSphere environments.
Stretched Clusters with Availability Zones remain supported for up to 8 HPE OmniStack hosts (4 per Availability Zone).
OmniStack 3.7.7 introduces support for deploying multiple hosts to a cluster at one time.
HPE OmniStack 3.7.8 introduces support for 48 clusters per Federation (previously 16 clusters) when using multiple vCenter Servers in Enhanced Linked Mode.
VDI = Virtual Desktop Infrastructure
VSI = Virtual Server Infrastructure
|
3-8 nodes in 1-node increments
There is a maximum of 8 nodes within a single cluster. Larger clusters do exist, but must be requested and are evaluated on a per use-case basis.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
VxRail 4.7.100 introduced support for 2-node clusters:
- The deployment is limited to VxRail E-Series nodes.
- Only 1GbE and 10GbE are supported. Inter-cluster VxRail traffic utilizes a pair of network cables linked between the physical nodes.
- A customer-supplied external vCenter is required that does not reside on the 2-node cluster.
- A Witness VM that monitors the health of the 2-node cluster is required and does not reside on the 2-node cluster.
2-node clusters are not supported when using the VxRail G410 appliance.
|
2 Node minimum
HPE SimpliVity 2600 supports 2-node configurations without sacrificing any of the data reduction and data protection capabilities. The HPE SimpliVity 2600 is ideal for ROBO deployments.
All the remote sites can be centrally managed from a single dashboard at the central site.
|
1 Node minimum
|
|
|
Storage Support
|
Score:87.5% - Features:14
- Green(Full Support):9
- Amber(Partial):3
- Red(Not support):0
- Gray(FYI only):2
|
Score:45.8% - Features:14
- Green(Full Support):2
- Amber(Partial):7
- Red(Not support):3
- Gray(FYI only):2
|
Score:79.2% - Features:14
- Green(Full Support):7
- Amber(Partial):5
- Red(Not support):0
- Gray(FYI only):2
|
|
|
|
General |
|
|
|
Object Storage File System (OSFS)
|
Parallel File System
on top of Object Store
Both the File System and the Object Store have been internally developed by HPE SimpliVity 2600.
|
Block Storage Pool
Scale Computing HC3 only serves block devices to the supported OS guest platforms. VMs running on HC3 have direct, block-level access to virtual SCRIBE devices (VSDs, aka virtual disks) in the clustered storage pool without the complexity or performance overhead introduced by using remote storage protocols.
A critical software component of HyperCore is the Scale Computing Reliable Independent Block Engine, known as SCRIBE. SCRIBE is an enterprise-class, clustered block storage layer, purpose-built to be consumed by the HC3 embedded KVM-based hypervisor directly.
SCRIBE discovers and aggregates all block storage devices across all nodes of the system into a single managed pool of storage. All data written to this pool is immediately available for read and write by any and every node in the storage system, allowing for sophisticated data redundancy, data deduplication, and load balancing schemes to be used by higher layers of the stack, such as the HyperCore compute layer.
SCRIBE is a wide-striped storage architecture that combines all disks in the cluster into a single storage pool that is tiered between flash SSD and spinning HDD storage.
|
|
|
Partial
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example. Apart from the read cache, VxRail only uses data locality in stretched clusters to avoid high latency.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Full
When a VM is created, it is optimally placed on the best 2 nodes available. When data is written, it is deduplicated and compressed on arrival, and then stored on the local node as well as a dedicated partner node. To keep performance optimal throughout the VM's lifecycle, OmniStack automatically creates VMware vSphere DRS affinity rules and policies. This is called Intelligent Workload Optimization. VMware DRS is made aware of where the data of an individual VM is. In effect, the VM follows the data rather than having the data follow the VM, as this prevents heavy moves of data to the VM. When an HPE SimpliVity 2600 node is added to the federation, the DRS rules related to OmniStack are automatically re-evaluated.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
None
Scale Computing HC3 is based on a shared-nothing storage architecture. Scale Computing HC3 enables every drive in every node throughout the cluster to contribute to the storage performance and capacity of every virtual disk (VSD) presented by the SCRIBE storage layer. When a VM is moved to another node, data remains in place and does not follow the VM, because data is stored and available across all nodes residing in the cluster.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
NEW
The software takes ownership of the unformatted physical disks available inside the host.
VMware vSAN 7.0 U1 introduces the HCI Mesh concept. With VMware HCI Mesh a vSAN cluster can leverage the storage of remote vSAN clusters for hosting VMs without sacrificing important features such as HA and DR. Up to 5 remote vSAN datastores can be mounted by a single vSAN cluster. HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols. It is fully software-based and does not require any specialized hardware.
|
Direct-attached (RAID)
The software takes ownership of the RAID groups provisioned by the server's hardware RAID controller.
|
Direct-Attached (Raw)
Direct-attached: The software takes ownership of the unformatted physical disks available in each host.
|
|
|
Hybrid (Flash+Magnetic)
All-Flash
Hybrid hosts cannot be mixed with All-Flash hosts in the same VxRail cluster.
|
All-Flash (SSD-only)
HPE has exclusively released all-flash models for the HPE SimpliVity 2600 platform. These models facilitate a variety of workloads, including those that demand ultra-high performance at low latency.
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
Scale Computing HC3 appliance models storage composition:
HC1200: Magnetic-only
HC1250: Hybrid
HC1250D: Hybrid
HC1250DF: All-flash
HC5250D: Hybrid
A Magnetic-only node is called a Non-tiered node and contains 100% HDD drives and no SSD drives.
A Hybrid node is called a Tiered node and contains 25% SSD drives and 75% HDD drives.
An All-Flash node contains 100% SSD drives and no HDD drives.
|
|
Hypervisor OS Layer
Details
|
SSD
Each VxRail Gen14 node contains 2x 240GB SATA M.2 SSDs with RAID1 protection to host the VMware vSphere hypervisor software.
|
SSD
1x 480GB M.2 SSD is used for system boot.
|
HDD or SSD (partition)
By default for each 1TB of data, 8MB is allocated for metadata. The data and metadata is stored on the physical storage devices (RSDs) and both are protected using mirroring (2N). Because metadata is this lightweight, all of the metadata of all of the online VSDs is cached in DRAM.
VSD = Virtual SCRIBE Device
RSD = Real SCRIBE Device
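To illustrate how lightweight this metadata is, the sketch below (Python; purely illustrative arithmetic based on the 8MB-per-1TB ratio stated above, with arbitrary example capacities) shows the resulting DRAM footprint needed to cache all metadata:
def scribe_metadata_mb(data_tb: float, mb_per_tb: float = 8.0) -> float:
    """Metadata footprint in MB for a given amount of data in TB (8MB per 1TB by default)."""
    return data_tb * mb_per_tb

if __name__ == "__main__":
    for tb in (1, 10, 100):
        print(f"{tb:>3} TB of data -> {scribe_metadata_mb(tb):,.0f} MB of metadata")
    # Even 100 TB of online VSDs needs only ~800 MB of metadata,
    # which is why all of it can be cached in DRAM.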
|
|
|
|
Memory |
|
|
|
DRAM
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
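For illustration, the following sketch (Python; the 0.4%-up-to-1GB rule is taken from the description above, the host memory sizes are arbitrary examples) computes the resulting read cache size per node:
def vxrail_read_cache_gb(host_memory_gb: float) -> float:
    """Read cache per node: 0.4% of host memory, capped at 1 GB."""
    return min(host_memory_gb * 0.004, 1.0)

if __name__ == "__main__":
    for mem_gb in (128, 256, 512):
        print(f"{mem_gb} GB host memory -> {vxrail_read_cache_gb(mem_gb):.2f} GB read cache")
    # A 128 GB host gets ~0.51 GB of read cache; hosts with 256 GB or more hit the 1 GB cap.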
|
DRAM (VSC)
|
DRAM
|
|
|
Read Cache
|
DRAM (VSC): Read Cache
|
Read Cache
Metadata structures
By default for each 1TB of data, 8MB is allocated for metadata. The data and metadata is stored on the physical storage devices (RSDs) and both are protected using mirroring (2N). Because metadata is this lightweight, all of the metadata of all of the online VSDs is cached in DRAM.
VSD = Virtual SCRIBE Device
RSD = Real SCRIBE Device
|
|
|
Non-configurable
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
DRAM (VSC): 16-48GB for Read Cache
Each Virtual Storage Controller (VSC) is equipped with 48-100GB total memory capacity, of which 16-48GB is used as read cache. The amount of memory allocated is fixed (non-configurable) and model dependent.
|
4GB+
4GB of RAM is reserved per node for the entire HC3 system to function. No specific RAM is reserved for caching but the system will use any available memory as needed for caching purposes.
|
|
|
|
Flash |
|
|
|
SSD; NVMe
VxRail Appliances support a variety of SSDs.
Cache SSDs (SAS): 400GB, 800GB, 1.6TB
Cache SSDs (NVMe): 800GB, 1.6TB
Capacity SSDs (SAS/SATA): 1.92TB, 3.84TB, 7.68TB
Capacity SSDs (NVMe): 960GB, 1TB, 3.84TB, 4TB
VxRail does not support mixing SAS/SATA SSDs in the same disk group.
|
SSD
|
SSD, NVMe
HyperCore-Direct for NVMe can be requested and is evaluated by Scale Computing on a per-customer scenario basis.
|
|
|
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
In all VxRail configurations 1 separate SSD per disk group is used for caching purposes. The other disks in the disk group are used for persistent storage of data.
For All-flash configurations, the flash device(s) in the cache tier are used for write caching only (no read cache) as read performance from the capacity flash devices is more than sufficient.
Two different grades of flash devices are used in an All-flash VxRail configuration: Lower capacity, higher endurance devices for the cache layer and more cost effective, higher capacity, lower endurance devices for the capacity layer. Writes are performed at the cache layer and then de-staged to the capacity layer, only as needed.
|
All-Flash: Metadata + Write Buffer + Persistent Storage Tier
Read cache is not necessary in All-flash configurations.
|
Persistent Storage
|
|
|
Hybrid: 4 Flash devices per node
All-Flash: 2-24 Flash devices per node
Each VxRail node always requires 1 high-performance SSD for write caching.
In Hybrid VxRail configurations the high-performance SSD in a disk group is also used for read caching. Per disk group 3-5 HDDs can be used as persistent storage (capacity drives).
In All-Flash VxRail configurations the high-performance SSD in a disk group is only used for write caching. Per node 1-5 SSDs can be used for read caching and persistent storage (capacity drives).
|
All-Flash: 6 SSDs per node
All-Flash SSD configurations:
170-series: 6x 1.92TB
190-series: 6x 1.92TB
|
Hybrid: 1-3 SSDs per node
All-Flash: 4 SSDs per node
Flash devices are not mandatory in a Scale Computing HC3 solution.
Each HC1200 hybrid node has 1 SSD drive attached.
Each HC1250 all-flash node has 4 SSD drives attached.
Each HC5250 node has 3 SSD drives attached.
An HC1250 all-flash node can have a maximum of 15.36TB of raw SSD storage attached.
|
|
|
|
Magnetic |
|
|
|
Hybrid: SAS or SATA
VxRail Appliances support a variety of HDDs.
SAS 10K: 1.2TB, 1.8TB, 2.4TB
SATA 7.2K: 2.0TB, 4.0TB
VMware vSAN supports the use of 512e drives. 512e magnetic hard disk drives (HDDs) use a physical sector size of 4096 bytes, but the logical sector size emulates a sector size of 512 bytes. Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.
VMware vSAN 6.7 introduces support for 4K native (4Kn) mode.
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
N/A
|
Hybrid: SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Persistent Storage
|
N/A
|
Persistent Storage
|
|
Magnetic Capacity
Details
|
3-5 SAS/SATA HDDs per disk group
In the current configurations there is a choice between 1.2/1.8/2.4TB 10k SAS drives and 2.0/4.0TB 7.2k NL-SAS drives.
The current configuration maximum for a single host/node is 4 disk groups consisting of 1 NVMe drive + 5 HDDs for hybrid configurations or 1 NVMe drive + 5 capacity SSDs for all-flash configurations = total of 24 drives per host/node.
Since a single VxRail G-Series chassis can contain up to 4 nodes, there's a total of 6 drives per node.
|
N/A
|
Magnetic-only: 4 HDDs per node
Hybrid: 3 or 9 HDDs per node
Magnetic devices are not mandatory in a Scale Computing HC3 solution.
Each HC1200 magnetic-only node has 4 HDD drives attached.
Each HC1250 hybrid node has 3 HDD drives attached.
Each HC5250 hybrid node has 9 HDD drives attached.
An HC1200 magnetic-only node can have a maximum of 32TB of raw HDD storage attached.
An HC1250 hybrid node can have a maximum of 24TB of raw HDD storage attached.
An HC5250 hybrid node can have a maximum of 72TB of raw HDD storage attached.
|
|
|
Data Availability
|
Score:70.0% - Features:30
- Green(Full Support):18
- Amber(Partial):6
- Red(Not support):6
- Gray(FYI only):0
|
Score:81.7% - Features:30
- Green(Full Support):22
- Amber(Partial):5
- Red(Not support):3
- Gray(FYI only):0
|
Score:61.7% - Features:30
- Green(Full Support):15
- Amber(Partial):7
- Red(Not support):8
- Gray(FYI only):0
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
Flash Layer (SSD, NVMe)
|
Flash Layer (SSD)
The HPE SimpliVity 2600 does not contain a proprietary PCIe-based HPE OmniStack Accelerator Card.
|
Flash/HDD
The persistent write buffer depends on the type of the block storage pool (Flash or HDD).
|
|
Disk Failure Protection
Details
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduces the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
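To make the cluster-size implications of these protection schemes concrete, the sketch below (Python; a simplified illustration derived from the figures quoted above, not an official VMware sizing tool) computes the minimum number of hosts for a given NumberOfFailuresToTolerate (FTT):
def min_vsan_hosts(ftt: int, erasure_coding: bool = False) -> int:
    """Minimum hosts: mirroring needs 2*FTT+1; RAID5 (FTT=1) needs 4; RAID6 (FTT=2) needs 6."""
    if erasure_coding:
        if ftt == 1:
            return 4  # RAID5: 3 data + 1 parity
        if ftt == 2:
            return 6  # RAID6: 4 data + 2 parity
        raise ValueError("Erasure coding is available for FTT=1 or FTT=2 only")
    return 2 * ftt + 1  # RAID1 mirroring

if __name__ == "__main__":
    print(min_vsan_hosts(1))                       # 3 hosts for FTT=1 with mirroring
    print(min_vsan_hosts(1, erasure_coding=True))  # 4 hosts for RAID5
    print(min_vsan_hosts(2, erasure_coding=True))  # 6 hosts for RAID6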
|
1 Replica (2N)
+ Hardware RAID (5 or 6)
HPE SimpliVity 2600 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical disk fails, hardware RAID maintains data availability.
Only when more than 2 disks fail within the same node does data need to be read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node level redundancy is re-established based on a user initiated action.
|
2-way Mirroring (Network RAID-10)
Within a Scale Computing HC cluster all data is written twice to the block storage pool for redundancy (2N). It is equivalent to Network RAID-10, as the two data chunks are placed on separate physical disks of separate physical nodes within the cluster. This protects against 1 disk failure and 1 node failure at the same time, and aggregates the I/O and throughput capabilities of all the individual disks in the cluster (= wide striping).
Once an RSD fails, the system re-mirrors the data using the free space in the HC3 cluster as a hot spare. Because all physical disks contain data, rebuilds are very fast. Scale Computing HC3 is often able to detect the deteriorated state of a physical storage device in advance and pro-actively copy data to other devices ahead of an actual failure.
Currently only 1 Replica (2N) can be maintained, as the setting is not configurable for end-users.
|
|
Node Failure Protection
Details
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduces the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
1 Replica (2N)
+ Hardware RAID (5 or 6)
HPE SimpliVity 2600 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical node fails, VMs need to be restarted and data is read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node level redundancy is re-established based on a user initiated action.
|
2-way Mirroring (Network RAID-10)
Within a Scale Computing HC cluster all data is written twice to the block storage pool for redundancy (2N). It is equivalent to Network RAID-10, as the two data chunks are placed on separate physical disks of separate physical nodes within the cluster. This protects against 1 disk failure and 1 node failure at the same time, and aggregates the I/O and throughput capabilities of all the individual disks in the cluster (= wide striping).
Once an RSD fails, the system re-mirrors the data using the free space in the HC3 cluster as a hot spare. Because all physical disks contain data, rebuilds are very fast. Scale Computing HC3 is often able to detect the deteriorated state of a physical storage device in advance and pro-actively copy data to other devices ahead of an actual failure.
Currently only 1 Replica (2N) can be maintained, as the setting is not configurable for end-users.
|
|
Block Failure Protection
Details
|
Failure Domains
Block failure protection can be achieved by assigning nodes in the same appliance to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
Not relevant (1-node chassis only)
HPE SimpliVity 2600 building blocks are based on 1-node chassis only. Therefore multi-node block (appliance) level protection is not relevant for this solution as Node Failure Protection applies.
|
Not relevant (1U/2U appliances)
|
|
Rack Failure Protection
Details
|
Failure Domains
Rack failure protection can be achieved by assigning nodes within the same rack to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
Group Placement
HPE SimpliVity 2600 intelligent software features include Rack failure protection. Both rack-level and site-level protection within a cluster are administratively determined by placing hosts into groups. Data is balanced appropriately to ensure that each VM is redundantly stored across two separate groups.
|
N/A
|
|
Protection Capacity Overhead
Details
|
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
RAID5: The stripe size used by vSAN for RAID5 is 3+1 (33% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID5 is 4 nodes.
RAID6: The stripe size used by vSAN for RAID6 is 4+2 (50% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID6 is 6 nodes.
RAID5/6 can only be leveraged in vSAN All-flash configurations because of I/O amplification.
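As a worked example of these overhead percentages, the sketch below (Python; illustrative arithmetic only, the 10TB usable capacity is an arbitrary example) shows how much raw capacity is consumed under each protection scheme:
PROTECTION_OVERHEAD = {
    "2N replicas": 1.00,   # 1 extra copy -> 100% overhead
    "3N replicas": 2.00,   # 2 extra copies -> 200% overhead
    "RAID5 (3+1)": 1 / 3,  # 1 parity per 3 data -> ~33% overhead
    "RAID6 (4+2)": 2 / 4,  # 2 parity per 4 data -> 50% overhead
}

def raw_needed_tb(usable_tb: float, scheme: str) -> float:
    """Raw capacity required to store the given usable capacity under a protection scheme."""
    return usable_tb * (1 + PROTECTION_OVERHEAD[scheme])

if __name__ == "__main__":
    for scheme in PROTECTION_OVERHEAD:
        print(f"{scheme}: {raw_needed_tb(10, scheme):.1f} TB raw for 10 TB usable")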
|
Replica (2N) + RAID5: 125-133%
Replica (2N) + RAID6: 120-133%
Replica (2N) + RAID60: 125-140%
The hardware RAID level that is applied depends on drive count in an individual node:
2 drives = RAID1
4-5 drives = RAID5
8-12 drives = RAID6
14-20 drives = RAID60 (2 per RAID6 set)
|
Mirroring (2N) (primary): 100%
|
|
Data Corruption Detection
Details
|
Read integrity checks
Disk scrubbing (software)
End-to-end checksum provides automatic detection and resolution of silent disk errors. Creation of checksums is enabled by default, but can be disabled through policy on a per VM (or virtual disk) basis if desired. In case of checksum verification failures data is fetched from another copy.
The disk scrubbing process runs in the background.
|
Read integrity checks (CLI)
Disk scrubbing (software)
While writing data, checksums are created and stored as part of the inline deduplication process. When one of the underlying layers detects data corruption, a checksum comparison is performed and when required, another copy of the data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Read integrity checks can be enabled through the CLI.
Disk Scrubbing, termed 'RAID Patrol' by HPE SimpliVity 2600, is a background process that is used to perform checksum comparisons of all data stored within the solution. This way stale data is also verified for corruption.
|
Read integrity checks (software)
Disk scrubbing (software)
The HC3 system performs continuous read integrity checks on data blocks to detect corruption errors. As blocks are written to disk, replica blocks are written to other disks within the storage pool for redundancy. Disks are continuously scrubbed in the background for errors and any corruption found is repaired from the replica data blocks.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
VMware vSAN uses the 'vSANSparse' snapshot format that leverages VirstoFS technology as well as in-memory metadata cache for lookups. vSANSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
|
Built-in (native)
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (eg. VMware Snapshots).
|
Built-in (native)
HyperCore snapshots use a space-efficient allocate-on-write methodology where no additional storage is used at the time the snapshot is taken; as blocks are changed, the original content blocks are preserved and new content is written to freshly allocated space on the cluster.
|
|
|
Local
|
Local + Remote
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (eg. VMware Snapshots).
|
Local (+ Remote)
Manual snapshots are always created on the source HC3 cluster only and are never deleted by the system.
Without remote replication active on a VM, snapshots created using snapshot schedules are also created on the source HC3 cluster only.
With remote replication active, a snapshot schedule repeatedly creates a VM snapshot on the source cluster and then copies that snapshot to the target cluster, where it is retained for a specified number of minutes/hours/days/weeks/months. The default remote replication frequency of 5 minutes, combined with the default retention of 25 minutes, means that by default 5 snapshots are maintained on the target HC3 cluster at any given time.
A VM can only have one snapshot schedule assigned at a time. However, a schedule can contain multiple recurrence rules. Each recurrence rule consists of a replication snapshot frequency (x minutes/hours/days/weeks/months), an execution time (eg. 12:00AM), and a retention (y minutes/hours/days/weeks).
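The relationship between schedule frequency, retention window and the number of snapshots kept on the target cluster can be illustrated with the following sketch (Python; the 5-minute/25-minute defaults are from the description above, the other values are hypothetical):
def snapshots_retained(frequency_min: int, retention_min: int) -> int:
    """Snapshots kept at steady state = retention window divided by snapshot frequency."""
    return retention_min // frequency_min

if __name__ == "__main__":
    # Default schedule: replicate every 5 minutes, retain for 25 minutes -> 5 snapshots.
    print(snapshots_retained(5, 25))
    # Hypothetical: a 15-minute frequency (the recommended best practice) with a 2-hour retention -> 8 snapshots.
    print(snapshots_retained(15, 120))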
|
|
Snapshot Frequency
Details
|
GUI: 1 hour
vSAN snapshots are invoked using the existing snapshot options in the VMware vSphere GUI.
To create a snapshot schedule using the vCenter (Web) Client: click on a VM, then inside the Monitoring tab select Tasks & Events, Scheduled Tasks, 'Take Snapshots…'.
A single snapshot schedule allows a minimum frequency of 1 hour. Manual snapshots can be taken at any time.
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Backups can be scheduled.
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
5 minutes
A snapshot schedule allows a minimum frequency of 5 minutes. However, ScaleCare Support recommends no less than every 15 minutes as a general best practice.
|
|
Snapshot Granularity
Details
|
Per VM
|
Per VM
|
Per VM
|
|
|
External (vSAN Certified)
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and was expected to go live in the first half of 2019. vSAN 7.0 also did not introduce native data protection.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
Built-in (native)
HPE SimpliVity 2600 provides native backup capabilities. Its backup feature supports remote-replication, is deduplication aware and data is compressed over the wire.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
|
Built-in (native)
By combining Scale Computing HC3s native snapshot feature with its native remote replication mechanism, backup copies can be created on remote HC3 clusters.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
Apart from the native features, Scale Computing HC3 supports any in-guest 3rd party backup agents that are designed to run on Intel-based virtual machines on our supported OS platforms.
|
|
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
Locally
To other SimpliVity sites
To Service Providers
Backup remote-replication is deduplication aware + data is compressed over the wire.
|
Locally
To remote sites
|
|
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
5 minutes (Asynchronous)
VM snapshots are created automatically by the replication process as quickly as every 5 minutes (as long as the previous snapshot’s change blocks have been fully replicated to the target HC3 cluster). The remote replication default schedule will take a snapshot every 5 minutes and keep snapshots for 25 minutes.
|
|
Backup Consistency
Details
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows)
Hyper-V: File System Consistent (Windows)
HPE SimpliVity 2600 provides the option to enable Microsoft VSS integration when configuring a backup policy or when initiating manual backups using the CLI backup command. This ensures application-consistent backups are created for MS Exchange and MS SQL database environments.
In OmniStack 3.6.1 support was added for VSS on virtual machines running SQL Server 2012/2016 on the Windows Server 2012 R2 operating system.
HPE SimpliVity 2600 also still provides the option to create crash-consistent backups by setting consistency to 'none'.
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
For Windows VMs that require it, VSS snapshot integration is provided in the VIRTIO driver package.
|
|
Restore Granularity
Details
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
vSphere: Entire VM or Single File
Hyper-V: Entire VM
With HPE SimpliVity 2600 snapshots and backups are the same.
|
Entire VM
Although Scale Computing HC3 uses block storage, the platform is capable of attaining per-VM granularity.
|
|
Restore Ease-of-use
Details
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
Entire VM: GUI, CLI and API
Single File: GUI
Single File restores can be performed entirely from the vSphere Web Client GUI due to the plugin integration.
|
Entire VM: Multi-step
Single File: Multi-step
Restoring VMs or single files from HC3 storage snapshots requires a multi-step approach.
For file-level restores a VM snapshot needs to be cloned and mounted so the file can be read from the mount. Cloning and mounting does not alter the original VM snapshot.
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (Stretched Clusters only)
External
VMware vSAN does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore, in non-stretched setups it relies on external remote replication mechanisms like VMware's free-of-charge vSphere Replication (VR) or any vSphere-compatible 3rd party remote replication application (eg. Zerto VR).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between VMware vSAN and VMware vSphere VR.
As of vSAN 7.0 vSphere Replication objects are visible in the vSAN capacity view. Objects are recognized as vSphere replica type, and space usage is accounted for under the Replication category.
|
Built-in (native)
HPE SimpliVity 2600 provides native DR and replication capabilities.
|
Built-in (native)
All HC3 source and target clusters that will be participating in remote replication must run the same HCOS version. It is possible to replicate between a tiered and non-tiered HC3 cluster.
HC3 remote replication uses network compression by default.
|
|
Remote Replication Scope
Details
|
VR: To remote sites, To VMware clouds
vSAN allows for replication of VMs to a different vSAN cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
|
To remote sites
|
To remote sites
To Google Cloud Platform (GCP)
Network latency between the source and HC3 target clusters should be below 2,000ms (2 seconds).
Scale Computing HC3 Cloud Unity DRaaS: This disaster recovery as a service offering provides an HC3 DR target running securely in Google Cloud Platform (GCP). Workloads can be replicated to the Google cloud for failover or recovery on a per VM basis. HC3 Cloud Unity DRaaS uses L2 networking to provide seamless connectivity between on-prem and remote hosted VMs in the event of failover. HC3 Cloud Unity DRaaS includes ScaleCare support at every stage to assist in setup, testing, failover, and recovery. The service also comes with a runbook to assist with both planning and execution. When needed, all protected VMs can be failed over and running in the cloud and then failed back once the on-prem resources are restored. The Recovery Point Objective (RPO) is 4 hours for recovery of the first VM on GCP.
HC3 Cloud Unity DRaaS requires a monthly subscription that is in part based on GCP resource usage (compute, storage, network).
|
|
Remote Replication Cloud Function
Details
|
VR: DR-site (VMware Clouds)
Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
N/A
HPE SimpliVity 2600 does not support replication to hyperscale public cloud targets (AWS, Azure, GCP) at this time.
|
DR-site (GCP)
All protected VMs can be failed over and running in the cloud and then failed back once the on-prem resources are restored.
Scale Computing HC3 Cloud Unity DRaaS leverages Google Cloud Platform (GCP) as a DR-site. All traffic between the on-premises HC3 environment and GCP utilizes an encrypted connection, authenticated via pre-shared key. Only changed blocks are transmitted. Replicated data remains solely in the zone chosen to run the HC3 Cloud instance in.
Current HC3 Cloud Unity/Google Datacenter Locations:
- United States: Iowa, South Carolina, Oregon
- Canada: Montreal
- Europe: Belgium, London, Frankfurt
|
|
Remote Replication Topologies
Details
|
VR: Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Scale Computing HC3 supports 1-to-1 replication as well as many-to-1 replication. 1-to-1 replication includes support for cross-replication between two systems, meaning source-to-target and target-to-source. 1-to-many replication means that different VMs from one system can be replicated to different remote systems; with HC3 the same VM cannot be replicated to different remote systems. Many-to-1 means that multiple source systems can replicate VMs to the same target system. A maximum of 25 HC3 source systems can replicate to a single HC3 target system.
|
|
Remote Replication Frequency
Details
|
VR: 5 minutes (Asynchronous)
vSAN: Continuous (Stretched Cluster)
The 'Stretched Cluster' feature is only available in the Enterprise edition.
|
GUI: 10 minutes (Asynchronous)
CLI: 1 minute (Asynchronous)
Continuous (Stretched Cluster)
Although setting the remote replication frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
5 minutes
VM snapshots are created automatically by the replication process as quickly as every 5 minutes (as long as the previous snapshot’s change blocks have been fully replicated to the target HC3 cluster). The remote replication default schedule will take a snapshot every 5 minutes and keep snapshots for 25 minutes.
ScaleCare Support recommends a snapshot frequency of 15 minutes as a general best practice.
|
|
Remote Replication Granularity
Details
|
VR: VM
|
VM
|
VM
Excluding virtual disks from VM is a feature that is still in testing and must currently be done by engaging Support and discussing the options and considerations required.
|
|
Consistency Groups
Details
|
VR: No
Protection is on a per-VM basis only.
|
No
Protection is on a per-VM basis only; no logical groupings of VMs can be created.
|
No
|
|
|
VMware SRM (certified)
VMware Interoperability Matrix shows official support for SRM 8.3.
|
RapidDR (native; VMware only)
NEW
HPE SimpliVity has developed its own DR orchestration software named 'RapidDR'. The current version is RapidDR 3.5.1, released in December 2020. Microsoft Hyper-V and VMware NSX are not supported.
HPE SimpliVity RapidDR includes an intuitive planning guide that allows for the creation of DR workflows in five easy steps. RapidDR provides one-click activation for recovery of all virtualized workloads according to the plan. RapidDR also creates detailed historical reports for compliance audits. Lastly, RapidDR provides the ability to test existing recovery plans without impacting running workloads. This allows an organization to confirm the IT readiness for a disaster by testing recovery plans in advance.
HPE SimpliVity RapidDR v2.0 introduced:
- priority-based parallel recovery of VMs, significantly reducing failover time (up to 80% compared to sequential failover time).
- setting recovery order priority at both the recovery group level and the individual VM level, where the recovery group level priority takes precedence.
- proactive assessment of the source site configuration.
- choice out of four recovery actions if VM recovery fails.
HPE SimpliVity RapidDR v2.1 introduced:
- automated failback functionality.
- increased number of VMs per recovery plan from 150 to 600.
- increased number of recovery groups per recovery plan from 20 to 50.
- increased number of VMs in the entire recovery environment from 300 to 1000.
HPE SimpliVity RapidDR v2.5 introduced:
- new and improved user interface with expandable menu options and intuitive workflows.
- option to validate failover and failback settings (check and report inconsistencies between the recovery configuration file and the recovery site).
HPE SimpliVity RapidDR v2.5.1 introduced:
- automated PowerCLI configuration during installation
- support for RPO functionality (minimum RPO is 10 minutes)
- simplified VM and Recovery Group settings page
HPE SimpliVity RapidDR v3.0 introduced:
- support for Microsoft Hyper-V hypervisor
- support for 50 VMs per recovery plan in a Hyper-V environment
- Quick Plan Editor, which allows editing of VM login credentials in recovery plans created for VMware
- option to generate Audit Report pdf which contains the entire recovery execution sequence
- significant improvement of recovery times for VMware based recovery plans
- improved error reporting and troubleshooting information
HPE SimpliVity RapidDR v3.0.1 introduced software fixes and no new features.
HPE SimpliVity RapidDR v3.1 introduced:
- encrypted passwords in recovery plans providing enhanced security
- revamped user interface for enhanced user experience
- recovery of Windows guest VMs by using non-administrator user accounts
- Log Mode button to choose a log level of all the RapidDR logs.
- a single-click log collection button for downloading all of the RapidDR logs in zip format
- export/import of recovery configuration settings for VMs from an Excel sheet during plan creation or modification
- support for centrally managed federation
HPE SimpliVity RapidDR v3.5.0 introduces:
- NIC specific gateway and DNS to be used during recovery
- validation of guest VMs network settings after it is recovered.
- copying or moving backups from HPE SimpliVity clusters to HPE StoreOnce.
- recovery from external store (HPE StoreOnce appliance) backups
- user intuitive UX features for seamless navigation within and across workflows
- viewing recovery plan details and all plan activity
- Microsoft Hyper-V is no longer supported (!)
HPE SimpliVity RapidDR v3.5.1 introduces:
- support for DVS
- support for CentOS 8 and RHEL 8 guest VMs
- support for creation of VM backups in Test Failover and Test Failback workflows
HPE SimpliVity RapidDR is optional and requires separate software licenses.
Alternatively, DR orchestration can also be built using vRealize Automation (vRA).
|
HC3 Cloud Unity (native)
HC3 Cloud Unity DRaaS is a complete cloud service that provides a Disaster Recovery (DR) runbook outlining DR procedures.
DR testing involves cloning a replicated VM snapshot on a remote cluster and booting the clone.
Cloud Unity DRaaS is available in the following Google Regions:
United States: Iowa, South Carolina, Oregon
Canada: Montreal
Europe: Brussels, London, Frankfurt
HyperCore 8.5.1 introduced a bulk action allowing the cloning of Replication Target VMs.
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
There is read-locality in place preventing sub-optimal cross-site data transfers.
vSAN 7.0 introduces redirection for all VM I/O from a capacity-strained site to the other site, until the capacity is freed up. This feature improves the uptime of VMs.
|
VMware vSphere: Yes (certified)
HPE SimpliVity is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/s/article/51462
HPE SimpliVity offers the ability to convert existing non-stretched cluster HPE SimpliVity deployments to stretched cluster deployments. Workloads can be automatically distributed among availability zones. Availability Zone management can be performed from the HPE SimpliVity vCenter tab.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
3-sites: two active sites + tie-breaker in 3rd site
NEW
The use of the Stretched Cluster Witness Appliance automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM within a third site.
vSAN 6.7 introduced the option to configure a dedicated VMkernel NIC for witness traffic. This enhances data security because witness traffic is isolated from vSAN data traffic.
vSAN 7.0 U1 introduces the vSAN Shared Witness. This feature allows end-user organizations to leverage a single Witness Appliance for up to 64 stretched clusters. This is especially useful in scenarios where many edge locations are involved. The size of the Witness Appliance determines the maximum number of clusters and components that can be managed.
|
3-sites: two active sites + tie-breaker in 3rd site (optional)
HPE SimpliVity calls the tie-breaker 'arbiter'. The use of the arbiter automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The arbiter can be either on-premises or hosted as a cloud instance, and is recommended but not a hard requirement. The arbiter is a small Windows service, so it can run about anywhere, as long as it has network access to both active sites. A single arbiter may be shared by multiple clusters in the federation. Installing an additional arbiter for every 4,000 virtual machines helps ensure best performance and distributes workloads.
In VMware vSphere environments the arbiter is a hard requirement for 2-node clusters and stretched cluster configurations. Furthermore, HPE recommends using the arbiter in non-stretched cluster configurations with 4 HPE OmniStack hosts.
Currently for Microsoft Hyper-V environments (2-node clusters) the arbiter is a hard requirement.
The HPE OmniStack 3.7.9 version of Arbiter introduced support for clusters with hosts that use HPE OmniStack 3.7.8 and other clusters with hosts that use HPE OmniStack 3.7.9. The federation can contain a mix of clusters with those two versions. However, all the hosts within the same cluster must use the same version.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
<=5ms RTT
|
<=5ms RTT
RTT = Round Trip Time
An RTT of 5ms or less is required between the two active sites.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
<=15 nodes at each active site
For Dell EMC VxRail the minimum stretched cluster configuration is 3+3+1, meaning 3 nodes on the first site, 3 nodes on the second site and 1 tie-breaker VM on a third site. The maximum stretched cluster configuration is 15+15+1, meaning 15 nodes on the first site, 15 nodes on the second site and 1 tie-breaker VM on a third site.
|
<=8 hosts at each active site
HPE SimpliVity 2600 allows up to 16 nodes to be placed across two datacenters.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
SC Data Redundancy
Details
|
Replicas: 0-3 Replicas (1N-4N) at each active site
Erasure Coding: RAID5-6 at each active site
VMware vSAN 6.6 and up provide enhanced stretched cluster availability with Local Fault Protection. You can provide local fault protection for virtual machine objects within a single site in a stretched cluster. You can define a Primary level of failures to tolerate for the cluster, and a Secondary level of failures to tolerate for objects within a single site. When one site is unavailable, vSAN maintains availability with local redundancy in the available site.
In the case of stretched clustering, selecting 0 replicas means that there is only one instance of the data available at each of the active sites.
Local Fault Protection is only available in the Enterprise edition of vSAN.
|
Replicas: 1N at each active site
+ Hardware RAID (5, 6 or 60)
In the case of stretched clustering, 1N means that there is only one instance of the data available at each of the active sites.
With hardware RAID (5 or 6) implemented, data is protected across cluster nodes within each active site.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
Data Services
|
Score:58.6% - Features:29
- Green(Full Support):12
- Amber(Partial):10
- Red(Not support):7
- Gray(FYI only):0
|
Score:46.6% - Features:29
- Green(Full Support):11
- Amber(Partial):5
- Red(Not support):13
- Gray(FYI only):0
|
Score:43.1% - Features:29
- Green(Full Support):9
- Amber(Partial):7
- Red(Not support):13
- Gray(FYI only):0
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
All-Flash: Software
Hybrid: N/A
Deduplication and compression are only available in the VxRail All-Flash ('F') appliances.
|
Software
HPE SimpliVity 2600 utilizes software-optimized acceleration for deduplication and compression.
|
Software
Scale Computing HC3 is able to leverage native background data deduplication to reduce the physical space occupied by virtual disks.
The storage details available in the HC3 Web interface provide information on efficiency gains resulting from deduplication.
|
|
Dedup/Compr. Function
Details
|
Efficiency (Space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
Efficiency and Performance
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
HPE SimpliVity 2600 focusses on both aspects.
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
|
Dedup/Compr. Process
Details
|
All-Flash: Inline (post-ack)
Hybrid: N/A
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
Deduplication: Inline (on-ack)
Compression: Inline (on-ack)
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
Post-Processing
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
The Scale Computing HC3 data deduplication feature is considered a post-process implementation that works with existing background processes to identify duplicate 1 MiB blocks of data on a given physical disk. The process leverages the SCRIBE metadata reference count mechanism by finding independently written blocks that are the same. This duplicate review is performed for each physical disk on a given node to ensure as small a footprint as possible while providing all of the benefits of full deduplication.
The deduplication process is broken into two steps. The first step reviews VM data blocks by creating a hash index of each block and storing the hash in the node's RAM. The hashing algorithm can scan the system data for deduplication candidates at roughly 1 MiB/s on HDDs and 4 MiB/s on SSDs; both estimates are per node. The second step occurs during low system utilization. The system works through the queue of hashed blocks in RAM and searches for matching hashes until the background disk scan regenerates them. When the process finds two blocks with a matching hash, it verifies that the underlying blocks are in fact duplicates before incrementing the reference count in metadata on the block it is planning to free. Updating the metadata count for the block essentially releases the space of the duplicate block. The block then returns to the system’s free storage pool. This secondary process can progress much faster than 1 MiB/s; the speed is dependent on the current system load.
The SCRIBE metadata reference count mechanism is the same architecture utilized by snapshots and clones in SCRIBE to allow quick, efficient, low-impact thin-provisioning on the HC3 system. Shared blocks are referenced and a count to the block stored in the metadata.
SCRIBE = Scale Computing Reliable Independent Block Engine
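To make the two-step flow above concrete, here is a minimal sketch in Python. It is purely illustrative: the 1 MiB block size, the in-RAM hash queue and the metadata reference counting follow the description above, but the class and function names are hypothetical and do not represent Scale Computing's actual implementation.

import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks, as described above

class PostProcessDedup:
    """Hypothetical sketch of the two-step post-process deduplication pass."""
    def __init__(self):
        self.hash_index = {}  # hash -> representative block id (held in RAM)
        self.refcount = {}    # block id -> metadata reference count
        self.queue = []       # hashed blocks awaiting the low-utilization pass

    def scan(self, blocks):
        # Step 1: hash each allocated 1 MiB block and queue it for later review.
        for block_id, data in blocks.items():
            self.refcount.setdefault(block_id, 1)
            self.queue.append((block_id, hashlib.sha256(data).hexdigest()))

    def merge_duplicates(self, read_block):
        # Step 2 (runs during low system utilization): verify and merge duplicates.
        freed = []
        for block_id, digest in self.queue:
            keeper = self.hash_index.setdefault(digest, block_id)
            if keeper == block_id:
                continue
            # A hash match alone is not enough: verify the raw bytes are identical.
            if read_block(keeper) == read_block(block_id):
                self.refcount[keeper] += 1          # shared block now referenced once more
                self.refcount.pop(block_id, None)   # duplicate returns to the free pool
                freed.append(block_id)
        self.queue.clear()
        return freed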
|
|
Dedup/Compr. Type
Details
|
All-Flash: Optional
Hybrid: N/A
NEW
Compression occurs after deduplication and just before the data is actually written to the persistent data layer.
In vSAN 7.0 U1 and onwards there are three settings to choose from: 'None', 'Compression Only' or 'Deduplication'.
When choosing 'Compression only' deduplication is effectively disabled. This optimizes storage performance, resource usage as well as availability. When using 'Compression only' a single disk failing no longer impacts the entire disk group.
|
Always-on
HPE SimpliVity 2600's data deduplication and compression features are always on and cannot be disabled, as they are an integral component of the platform architecture providing both performance and efficiency. This also provides end-user simplicity.
|
Always-on
By default Scale Computing HC3 data deduplication is turned on. The platform has been designed to prioritize running workloads over the deduplication tasks to prevent any negative performance impact. As such, the process piggybacks on pre-existing background structures - such as the background disk scrubber - for the hashing process.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
Deduplication and compression are not used for optimizing read/write cache.
|
All data (memory-, flash- and persistent data layers)
Each node maintains its own deduplication database in order to preserve data availability when a node in the Federation fails.
|
Persistent data layer
|
|
Dedup/Compr. Radius
Details
|
Disk Group
Deduplication and compression are enabled as a cluster-wide setting and are performed within each disk group. Redundant copies of a block within the same disk group are reduced to one copy. However, redundant blocks across multiple disk groups are not deduplicated.
|
Federation
HPE SimpliVity 2600 inline deduplication works globally, which means that deduplication happens across the entire data set within a federation (a single federation consists of one or multiple clusters). The data set includes primary data as well as backup copies.
HPE SimpliVity 2600 inline deduplication also works across sites. This means that SimpliVity OmniStack will talk to the other site and only send changed blocks.
Because the inline deduplication and compression performed by the HPE SimpliVity 2600 software provide essential value and do not incur any significant performance penalty, they cannot be turned off.
|
Per Node
Scale Computing HC3 data deduplication works on a per node basis. All blocks that are directly written or replicated from another node are deduplicated by the individual node independently from other nodes within the same cluster. This way data integrity is ensured for every single node.
|
|
Dedup/Compr. Granularity
Details
|
4 KB fixed block size
vSAN's deduplication algorithm utilizes a 4 KB fixed block size.
|
4-8 KB variable block size
HPE SimpliVity deduplication uses 4KB - 8KB variable block segments.
|
1 MiB
Scale Computing HC3 post-process data deduplication uses 1 MiB fixed block segments.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Enabling deduplication and compression can reduce the amount of storage consumed by as much as 7x (7:1 ratio).
|
90% (10:1) capacity savings across storage and backup combined
Capacity space savings are due to deduplication+compression and include both storage and backups.
90% savings is the equivalent of 10:1 efficiency in the Datacenter panel in the HPE SimpliVity tab within the vSphere Client. Efficiency is calculated across all HPE SimpliVity systems in a VMware Datacenter. It’s the ratio of storage capacity that would have been used on a comparable traditional storage solution to the physical storage that is actually used in the HPE SimpliVity hyperconverged infrastructure. ‘Comparable traditional solutions’ are storage systems that provide VM-level synchronous replication for storage and backup and do not include any deduplication or compression capability.
The savings/efficiency are based on the assumption that you configure a backup policy to take at least one HPE SimpliVity backup per day of every virtual machine on every HPE SimpliVity system in a given VMware Datacenter with those backups retained for 30 days. If backups are performed more frequently and/or retained for a longer period, you will enjoy even greater efficiency. The data change rate is assumed to be up to 5% per day with up to 30% growth rate of the data over a duration of 30 contiguous days.
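As a rough, illustrative back-of-the-envelope calculation (not HPE's actual sizing model), the Python snippet below shows why daily backups with a 5% change rate retained for 30 days can plausibly yield a ratio in the 10:1 range when only changed data consumes physical capacity:

# Illustrative only: compares logical capacity (what a traditional system without
# deduplication/compression would store) against an assumed physical consumption.
primary_tb = 1.0          # primary VM data
daily_change = 0.05       # 5% daily change rate (assumption from the guarantee)
retention_days = 30       # one backup per day, retained 30 days

# Logical capacity: primary copy plus 30 retained backups, each a full logical copy.
logical_tb = primary_tb + retention_days * primary_tb

# Assumed physical capacity: primary data plus only the changed blocks per backup,
# i.e. roughly what a deduplicating system would need to keep.
physical_tb = primary_tb + retention_days * primary_tb * daily_change

print(f"Logical: {logical_tb:.1f} TB, physical: {physical_tb:.1f} TB, "
      f"efficiency ~{logical_tb / physical_tb:.1f}:1")
# -> Logical: 31.0 TB, physical: 2.5 TB, efficiency ~12.4:1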
|
N/A
|
|
|
Full
Data can be redistributed evenly across all nodes in the cluster when a node is either added or removed.
For VMware vSAN data redistribution happens in two ways (a small decision sketch follows below):
1. Automated: when physical disks are between 30% and 80% full and a node is added to the vSAN cluster, a health alert is generated that allows the end-user to execute an automated data rebalancing run. For this VMware uses the term 'proactive'.
2. Automatic: when physical disks are more than 80% full, vSAN executes data rebalancing fully automatically, without requiring any user intervention. For this VMware uses the term 'reactive'.
As data is written, all nodes in the cluster service RF copies even when no VMs are running on the node which ensures data is being distributed evenly across all nodes in the cluster.
VMware vSAN 6.7 U3 includes proactive rebalancing enhancements. All rebalancing activities can be automated with cluster-wide configuration and threshold settings. Prior to this release, proactive rebalancing was manually initiated after being alerted by vSAN health checks.
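The proactive/reactive distinction described above can be sketched as a simple decision rule; the thresholds follow the text, while the function itself is only illustrative:

def rebalance_action(disk_fill_ratio: float, node_added: bool) -> str:
    """Illustrative sketch of the proactive/reactive rebalancing decision above."""
    if disk_fill_ratio > 0.80:
        return "reactive: rebalance automatically, no user intervention"
    if node_added and 0.30 <= disk_fill_ratio <= 0.80:
        return "proactive: raise health alert, user triggers rebalancing run"
    return "no rebalancing needed"

print(rebalance_action(0.55, node_added=True))   # proactive
print(rebalance_action(0.85, node_added=False))  # reactive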
|
Partial
Data that is already present before adding a node is not rebalanced across all nodes within a Federation. This is in accordance with HPE SimpliVity 2600's data locality strategy. However, data is automatically rebalanced across all nodes before removing/evicting a node from a Federation.
Data can be rebalanced at any time, but currently requires a HPE SimpliVity 2600 Support engagement. Some customers have regular, support-initiated cadences for rebalancing.
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. Whether this makes sense depends on the specific use case and end-user environment.
|
|
|
N/A
The VMware vSAN storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
N/A
The HPE SimpliVity 2600 storage architecture is based on a single storage layer (SSD) and hence does not include multiple persistent storage layers to distribute data across.
|
Yes
Scale Computing HC3's HyperCore Enhanced Automated Tiering (HEAT) is an extension of the SCRIBE storage layer that is available to HC3 hybrid clusters with 3 nodes or more.
HEAT allows virtual disk level, priority data placement for allocated data (actual consumed capacity on a VM virtual disk). This is accomplished through a real-time heat map of virtual disk I/O in order to “tier” the data. Data blocks that are “hot” (=accessed regularly by the virtual disk) are stored at the SSD level while “colder” data blocks are stored at the HDD level.
There are 4 basic HEAT principles:
1. All VMs have access to SSDs, no matter what node the VM may actually be running on.
2. SSDs are additional capacity for VM disks (subvirtual tiering), not a cache for system data.
3. Administrators have granular control of SSD access at the VM virtual disk level.
4. Administrators are able to mix and match Tiered HC3 nodes with standard HC3 nodes and Storage Only nodes without any extra work or requirements.
The HC3 web interface provides an easy-to-use slide bar on the property page of an individual virtual disk in order to set the flash priority level of a VM’s virtual disk data:
0 Off
1 Minimum
2 Very Low
3 Low
4 Normal (default)
5 High
6 Very High
7 Extreme
8 Absurd
9 Hyperspeed
10 Ludicrous Speed
11 These go to 11
When the flash priority level is set to 0, no data on the virtual disk ever gets promoted to the SSD layer. When the flash priority level is set to 11, all data on the virtual disk is promoted to the SSD layer.
Altering HEAT priority will affect all VM virtual disks within the HC3 cluster. Each increase in flash priority will dedicate roughly twice as much flash capacity to the VM virtual disk, and consequently reduce the flash capacity available for other VM virtual disks on the system.
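Assuming each priority step roughly doubles a virtual disk's share of flash, as stated above, the relative allocation can be approximated as follows (an illustrative normalization, not Scale Computing's actual allocator):

def flash_shares(priorities):
    """Illustrative: relative SSD weights if each HEAT step doubles the share.

    Priority 0 gets no flash; priorities 1-11 weigh in at 2**(priority - 1).
    """
    weights = {vd: 0 if p == 0 else 2 ** (p - 1) for vd, p in priorities.items()}
    total = sum(weights.values()) or 1
    return {vd: w / total for vd, w in weights.items()}

# Three virtual disks: default (4), very high (6) and flash-pinned (11)
print(flash_shares({"vdisk-a": 4, "vdisk-b": 6, "vdisk-c": 11}))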
|
|
|
|
Performance |
|
|
|
vSphere: Integrated
VMware vSAN is an integral part of the VMware vSphere platform and as such is not a separate storage platform.
VMware vSAN 6.7 adds TRIM/UNMAP support: space that is no longer used can be automatically reclaimed, reducing the overall capacity needed for running workloads.
|
vSphere: VMware VAAI-NAS (Limited)
GUI integrated tasks/commands
HPE SimpliVity 2600 is qualified for: File Cloning.
HPE SimpliVity 2600 offers an alternative task/command set through the vSphere management interface that provides instant offloading.
|
KVM: IOVirt
|
|
|
IOPs Limits (maximums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
The vSAN software inside VxRail currently supports only the first method and focusses on IOPs. 'MBps Limits' cannot be set. It is also not possible to guarantee a certain amount of IOPs for any given VM.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
HPE SimpliVity 2600 currently does not offer any QoS mechanisms.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Scale Computing HC3 currently does not offer any QoS mechanisms.
|
|
|
Per VM/Virtual Disk
Quality of Service (QoS) for vSAN is normalized to a 32KB block size, and treats reads the same as writes.
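A worked example of this normalization, based on the 32 KB base size mentioned above (illustrative helper, not a VMware API):

NORMALIZED_BLOCK = 32 * 1024  # vSAN QoS normalizes I/O to 32 KB units

def normalized_iops(io_size_bytes: int, raw_iops: int) -> int:
    """Number of 32 KB-normalized I/Os counted against an IOPS limit.

    I/Os of 32 KB or smaller count as one; larger I/Os count as multiple.
    """
    units = max(1, -(-io_size_bytes // NORMALIZED_BLOCK))  # ceiling division
    return raw_iops * units

print(normalized_iops(4 * 1024, 1000))    # 1000 x 4 KB I/Os  -> 1000 normalized I/Os
print(normalized_iops(128 * 1024, 1000))  # 1000 x 128 KB I/Os -> 4000 normalized I/Os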
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
HPE SimpliVity 2600 currently does not offer any QoS mechanisms.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Scale Computing HC3 currently does not offer any QoS mechanisms.
|
|
|
Cache Read Reservation: Per VM/Virtual Disk
With VxRail the Cache Read Reservation policy for a particular VM can be set to 100% to allow all data to also exist entirely on the flash layer. The difference with Nutanix 'VM Flash Mode' is that with 'VM Flash Mode' persistent data of the VM resides on flash and is never destaged to spinning disks. In contrast, with VxRail's Cache Read Reservation data exists twice: one instance on persistent magnetic disk storage and one instance within the SSD read cache.
|
Not relevant (All-Flash only)
The HPE SimpliVity 2600 platform is not available as a hybrid (flash+magnetic) configuration and as such has no need for a 'Flash Pinning' feature.
|
Per virtual disk
Scale Computing HC3's native HEAT feature allows data of an individual virtual disk to reside completely in flash storage. This can be administered on-the-fly by setting the flash priority in the virtual disk's properties to 11. The new HEAT priority setting is immediately activated on the VM's virtual disk.
For more information on HEAT please view the information provided with the 'Data Tiering' capability.
HEAT = HyperCore Enhanced Automated Tiering
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
|
N/A
Smart array controller encryption is currently not available in HPE Apollo Gen10 server hardware. Adding data encryption capabilities requires 3rd party security software.
|
N/A
Although the design of the SCRIBE storage management layer provides some general protection for data stored on a single hard drive, it is not the same as data encryption. If data encryption is required, it is recommended to use in-guest encryption tools to ensure data protection.
|
|
Data Encryption Options
Details
|
Hardware: N/A
Software: vSAN data encryption; HyTrust DataControl (validated)
NEW
Hardware: vSAN no longer supports self-encrypting drives (SEDs).
Software: vSAN supports native data-at-rest encryption of the vSAN datastore. When encryption is enabled, vSAN performs a rolling reformat of every disk group in the cluster. vSAN encryption requires a trusted connection between vCenter Server and a key management server (KMS). The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. In contrast, vSAN native data-in-transit encryption does not require a KMS server. vSAN native data-at-rest and data-in-transit encryption are only available in the Enterprise edition.
vSAN encryption has been validated for the Federal Information Processing Standard (FIPS) 140-2 Level 1.
VMware has also validated the interoperability of HyTrust DataControl software encryption with its vSAN platform.
|
Hardware: N/A
Software: HyTrust DataControl (validated); Vormetric VTE (validated)
Hardware: N/A
Software: HPE SimpliVity has validated the interoperability of HyTrust DataControl as well as Vormetric Transparent Encryption (VTE) software encryption with its OmniStack platform.
Currently HPE SimpliVity 2600 does not provide native software-based data-at-rest encryption.
|
Hardware: N/A
Software: HyTrust KeyControl + Client (validated); WinMagic SecureDoc CloudVM (validated)
Hardware: N/A
Software: Scale Computing partners with HyTrust to encrypt the drives of Windows and Linux VMs running on a HC3 system. The HyTrust client software that is installed on all VMs that require encryption, can encrypt both boot drives and data drives. Scale Computing has also validated the interoperability of WinMagic SecureDoc CloudVM for encryption of drives of Windows VMs with its HC3 platform.
|
|
Data Encryption Scope
Details
|
Hardware: N/A
Software vSAN: Data-at-rest + Data-in-transit
Software HyTrust: Data-at-rest + Data-in-transit
NEW
Hardware: N/A
Software: VMware vSAN 7.0 U1 encryption provides enhanced security for data on a drive as well as data in transit. Both are optional and can be enabled separately. HyTrust DataControl encryption also provides encryption for data-at-rest and data-in-transit.
|
Hardware: N/A
Software: Data-at-rest + Data-in-transit
Hardware: N/A
Software: HyTrust and Vormetric encryption solutions provide both data-at-rest and data-in-transit encryption.
|
Hardware: N/A
Software: Data-at-rest
|
|
Data Encryption Compliance
Details
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (vSAN); FIPS 140-2 Level 1 (HyTrust)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (HyTrust, VTE)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (HyTrust; WinMagic)
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: N/A
Software: No (vSAN); Yes (HyTrust)
Hardware: N/A
Software vSAN: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software HyTrust: Because HyTrust DataControl is an end-to-end solution, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
Hardware: N/A
Software: Yes
Hardware: N/A
Software: Because HyTrust and Vormetric are end-to-end solutions, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
Hardware: N/A
Software: Yes (HyTrust; WinMagic)
|
|
|
|
Test/Dev |
|
|
|
No
Dell EMC VxRail does not include fast cloning capabilities.
Cloning operations actually copy all the data to provide a second instance. When cloning a running VM on VxRail, all the VMDKs on the source VM are snapshotted first before cloning them to the destination VM.
|
Yes
The cloning process takes advantage of the global deduplication and compression.
|
Yes
Scale Computing HC3 leverages block reference counting to avoid having to copy blocks of data when creating a clone of a virtual machine. Because block reference counting is integrated in both the storage protocol as well as the RSDs, it is very fast and eliminates a round-trip when performing copy-on-write actions.
The clone feature on a HC3 cluster will create an identical VM to the parent, but with its own unique name and description. The clone VM will be completely independent from the parent VM once it is created.
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
HC3 Move
HC3 Move is powered by Carbonite (formerly Double-Take) and allows the migration of physical or virtual Windows and Linux-based server workloads with real-time replication and zero-downtime.
HC3 Move requires the purchase of a one-time-use license per server that needs to be migrated to the Scale Computing HC3 platform.
HC3 Move does not support desktop operating systems.
|
|
|
|
File Services |
|
|
|
Built-in (native)
External (vSAN Certified)
NEW
vSAN 7.0 U1 has integrated file services. vSAN File Services leverages scale-out architecture by deploying an Agent/Appliance VM (OVF templates) on individual ESXi hosts. Within each Agent/Appliance VM a container, or “protocol stack”, is running. The 'protocol stack' creates a file system that is spread across the VMware vSAN Virtual Distributed File System (VDFS), and exposes the file system as an NFS file share. The file shares support NFSv3, NFSv4.1, SMBv2.1 and SMBv3 by default. A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects. The minimum number of containers that need to be deployed is 3, the maximum 32 in any given cluster. vSAN 7.0 File Services are deployed through the vSAN File Service wizard.
vSAN File Services currently has the following restrictions:
- not supported on 2-node clusters,
- not supported on stretched clusters,
- not supported in combination with vLCM (vSphere Lifecycle Manager),
- it is not supported to mount the NFS share from your ESXi host,
- no integration with vSAN Fault Domains.
The alternative to vSAN File Services is to provide file services through Windows guest VMs (SMB) and/or Linux guest VMs (NFS) on top of vSAN. These file services can be made highly available by using clustering techniques.
Another alternative is to use virtual storage appliances from a third-party to host file services on top of vSAN. The following 3rd party File Services partner products are certified with vSAN 6.7:
- Cohesity DataPlatform 6.1
- Dell EMC Unity VSA 4.4
- NetApp ONTAP Select vNAS 9.5
- Nexenta NexentaStor VSA 5.1.2 and 5.2.0VM
- Panzura Freedom Filer VSA 7.1.9.3
However, none of the mentioned platforms have been certified for vSAN 7.0 or 7.0U1 (yet).
|
N/A
HPE SimpliVity 2600 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
N/A
HPE SimpliVity 2600 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
N/A
HPE SimpliVity 2600 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
|
Fileserver Quotas
Details
|
Share Quotas
vSAN 7.0 File Services supports share quotas through the following settings:
- Share warning threshold: When the share reaches this threshold, a warning message is displayed.
- Share hard quota: When the share reaches this threshold, new block allocation is denied.
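The behaviour of these two thresholds can be illustrated with a trivial check (a hypothetical helper, not the vSAN API):

def check_share_quota(used_gb: float, warn_gb: float, hard_gb: float) -> str:
    """Illustrative behaviour of the two vSAN file share quota thresholds."""
    if used_gb >= hard_gb:
        return "hard quota reached: deny new block allocations"
    if used_gb >= warn_gb:
        return "warning threshold reached: display a warning message"
    return "ok"

print(check_share_quota(used_gb=95, warn_gb=80, hard_gb=100))   # warning
print(check_share_quota(used_gb=100, warn_gb=80, hard_gb=100))  # hard quota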
|
N/A
HPE SimpliVity 2600 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
|
Fileserver Analytics
Details
|
Partial
vSAN 7.0 File Services provide some analytics capabilities:
- Amount of capacity consumed by vSAN File Services file shares,
- Skyline health monitoring with regard to infrastructure, file server and shares.
|
N/A
HPE SimpliVity 2600 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set at the share or the filesystem level.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
N/A
HPE SimpliVity 2600 does not provide any object storage serving capabilities of its own.
|
N/A
Scale Computing HC3 does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
HPE SimpliVity 2600 does not provide any object storage serving capabilities of its own.
|
N/A
Scale Computing HC3 does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
N/A
HPE SimpliVity 2600 does not provide any object storage serving capabilities of its own.
|
N/A
Scale Computing HC3 does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
Score:85.7% - Features:14
- Green(Full Support):11
- Amber(Partial):2
- Red(Not support):1
- Gray(FYI only):0
|
Score:85.7% - Features:14
- Green(Full Support):11
- Amber(Partial):2
- Red(Not support):1
- Gray(FYI only):0
|
Score:78.6% - Features:14
- Green(Full Support):10
- Amber(Partial):2
- Red(Not support):2
- Gray(FYI only):0
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
Management of the vSAN software, capacity monitoring, performance monitoring and efficiency reporting can be performed through the vSphere Web Client interface.
Other functionality such as backups and snapshots are also managed from the vSphere Web Client Interface.
|
Centralized
OmniStack management, capacity monitoring, performance monitoring and efficiency reporting is performed through the vSphere Web Client interface.
The HPE SimpliVity HTML5 Plug-in for vSphere Client supports all of the HPE SimpliVity features.
HPE SimpliVity OmniStack 4.0.0 introduces a role-based access control (RBAC) structure that allows defining users that can perform crash-consistent backups, and search for and restore those backups, when using the HPE OmniStack REST API or the HPE SimpliVity Plug-in.
HPE SimpliVity OmniStack 4.0.0 also introduces a new, centralized federation management type. This optional management type supports high-scale configurations. The centrally managed federation includes a virtual machine called the Management Virtual Appliance. The Management Virtual Appliance is a dedicated virtual machine that provides centralized management and coordination of operations across clusters of HPE SimpliVity hosts for high-scale configurations. In a centrally managed federation, up to 96 HPE SimpliVity hosts can be deployed in a single vCenter Server.
|
Centralized
Scale Computing HC3 management, capacity monitoring, performance monitoring and efficiency reporting is performed through the HC3 HTML5 web-based user interface.
|
|
|
Single-site and Multi-site
Centralized management of multicluster environments can be performed through the vSphere Web Client by using Enhanced Linked Mode.
Enhanced Linked Mode links multiple vCenter Server systems by using one or more Platform Services Controllers. Enhanced Linked Mode enables you to log in to all linked vCenter Server systems simultaneously with a single user name and password. You can view and search across all linked vCenter Server systems. Enhanced Linked Mode replicates roles, permissions, licenses, and other key data across systems. Enhanced Linked Mode requires the vCenter Server Standard licensing level, and is not supported with vCenter Server Foundation or vCenter Server Essentials.
|
Single-site and Multi-site
Centralized management of one or multiple Federations is performed from a single dashboard. This means that global implementations with multiple sites in multiple countries can be easily managed.
|
Single-site and Multi-site
Up to 25 clusters can be managed centrally using the Scale Computing HC3 web-based user interface.
|
|
GUI Perf. Monitoring
Details
|
Advanced
NEW
Performance information can be viewed on the cluster level, the Host level and the VM level. Per VM there is also a view on backend performance. Performance graphs focus on IOPS, MB/s and Latency of Reads and Writes. Statistics for networking, resynchronization, and iSCSI are also included.
End-users can select saved time ranges in performance views. vSAN saves each selected time range when end-users run a performance query.
There is also a VMware vRealize Operations (vROps) Management Pack for vSAN that provides additional options for monitoring, managing and troubleshooting vSAN.
The vSphere 6.7 Client includes an embedded vRealize Operations (vROps) plugin that provides basic vSAN and vSphere operational dashboards. The vROps plugin does not require any additional vROps licensing. vRealize Operations within vCenter is only available in the Enterprise and Advanced editions of vSAN.
vSAN observer as of vSAN 6.6 is deprecated but still included. In its place, vSAN Support Analytics is provided to deliver more enhanced support capabilities, including performance diagnostics. Performance diagnostics analyzes previously executed benchmark tests. It detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance Diagnostics requires participation in the Customer Experience Improvement Program (CEIP).
vSAN 6.7 U3 introduced a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere.
vSAN 7.0 introduces vSAN Memory metric through the performance service and the API for measuring vSAN memory usage.
vSAN 7.0 U1 introduces vSAN IO Insight for investigating the storage performance of individual VMs. vSAN IO Insight generates the following performance statistics which can be viewed from within the vCenter console:
- IOPS (read/write/total)
- Throughput (read/write/total)
- Sequential & Random Throughput (sequential/random/total)
- Sequential & Random IO Ratio (sequential read IO/sequential write IO/sequential IO/random read IO/random write IO/random IO)
- 4K Aligned & Unaligned IO Ratio (4K aligned read IO/4K aligned write IO/4K aligned IO/4K unaligned read IO/4K unaligned write IO/4K unaligned IO)
- Read & Write IO Ratio (read IO/writeIO)
- IO Size Distribution (read/write)
- IO Latency Distribution (read/write)
|
Advanced
HPE SimpliVity 2600 provides out-of-the-box performance monitoring functionality through the VMware vSphere Web Client plug-in. Performance metrics can be viewed at the datacenter, cluster and VM level. Performance graphs focus on IOPS (#), Throughput (MB/s) and Latency (ms), both for Reads and Writes. The GUI allows for adjusting the timescale from minutes to years in multiple steps.
|
Basic
|
|
|
VMware vSphere Web Client (integrated)
VxRail 4.7.300 adds full native vCenter plug-in support for all core day to day operations including physical views and Life Cycle Management (LCM).
VxRail 4.7.100 and up provide a VxRail Manager Plugin for VMware vCenter. The plugin replaces the VxRail Manager web interface. Also full VxRail event details are presented in vCenter Event Viewer.
vSAN 6.7 provides support for the HTML5-based vSphere Client that ships with vCenter Server. vSAN Configuration Assist and vSAN Updates are available only in the vSphere Web Client.
vSAN 6.6 and up provide integration with the vCenter Server Appliance. End-users can create a vSAN cluster as they deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables end-users to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster.
vSAN 6.6 and up also support host-based vSAN monitoring. This means end-users can monitor vSAN health and basic configuration through the ESXi host client. This also allows end-users to correct configuration issues at the host level.
|
VMware vSphere HTML5 Client (plugin)
The HPE SimpliVity OmniStack 3.7.10 HTML5 Plug-in for vSphere Client supports all of the HPE SimpliVity features. The HPE SimpliVity Plug-in for vSphere Web Client (Flex) is no longer supported.
HPE SimpliVity OmniStack 3.7.7 introduced plug-in support for vSphere HTML5 Client with the following functionality:
- View backup policies in a federation
- View cluster storage efficiency
- View virtual machines in a cluster
- View datastores in a cluster
- View hosts in a cluster
- View virtual machines and templates on a host
- View virtual machines or templates assigned to a backup policy
- View datastores assigned to a backup policy
- Create a backup policy
- Set a backup policy for datastore
HPE SimpliVity OmniStack 3.7.8 introduced the following expanded functionality for the vSphere HTML5 Client plug-in:
- Create backup policy with settings for backup days, backup type, and server start and stop times
- Rename backup policy and change or delete rules
- Delete a backup policy
- Identify space savings and see alerts when cluster storage reaches 20% of free space
- View cluster performance (IOPS, MBps, Latency)
- Enable or disable HPE OmniWatch for the federation
- Set proxy server for HPE OmniWatch agent
HPE SimpliVity OmniStack 3.7.9 introduced the following expanded functionality for the vSphere HTML5 Client plug-in:
- Access details on the federation through the following views: HPE SimpliVity Federation Home, Connected Clusters (topology), Backup Limits, Support Capture Monitor, and About
- Access options to view hardware components, create a Support Capture file, shut down the Virtual Controller to safely shut down an HPE OmniStack host, remove a host, share an HPE SimpliVity datastore with a standard ESXi host, and delete a datastore
- Calculate unique backup size
- Rename a backup
- Copy a backup to another cluster
|
Not relevant (Unified interface)
Because Scale Computing HC3 controls the entire Hyperconvergence stack (hypervisor, compute, storage), the HC3 web-based user interface provides all the required management functionality.
|
|
|
|
Programmability |
|
|
|
Full
Storage Policy-Based Management (SPBM) is a feature of vSAN that allows administrators to create storage profiles so that virtual machines (VMs) don't need to be individually provisioned/deployed and so that management can be automated. The creation of storage policies is fully integrated into the vSphere GUI.
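As a simple illustration of the policy-driven model (the rule names loosely mirror vSAN policy rules such as failures to tolerate; the structure itself is hypothetical and not the SPBM API):

# Hypothetical, declarative view of policy-based provisioning: the policy is
# defined once and assigned to many VMs, instead of configuring each VM by hand.
gold_policy = {
    "name": "gold",
    "failuresToTolerate": 2,   # mirrors the vSAN FTT policy rule
    "raidLevel": "RAID-1",
    "iopsLimit": 0,            # 0 = no limit
}

vm_assignments = {vm: gold_policy["name"] for vm in ["sql01", "sql02", "web01"]}
print(vm_assignments)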
|
Partial (Protection)
HPE SimpliVity 2600's VM-level management significantly reduces administrative overhead and the consumption of system resources by allowing policies for functions such as replication and backup to be specified for both individual VMs and groups of VMs.
|
Full
Scale Computing HC3 leverages Storage Policy-Based Management (SPBM) that allows administrators to build a profile for each VM with regard to protection and for each virtual disk with regard to data tiering.
|
|
|
REST-APIs
Ruby vSphere Console (RVC)
PowerCLI
Dell EMC VxRail 4.7.100 and up include RESTful API enhancements.
Dell EMC VxRail 4.7.300 and up provide full VxRail manager functionality using RESTful API.
|
REST-APIs
XML-APIs
PowerShell (community supported)
CLI
HPE SimpliVity 2600 provides an extensive REST API command set that can be used to automate many operational activities.
HPE OmniStack REST API v1.13 changes GET /hosts and GET /virtual_machines functionality.
HPE OmniStack REST API v1.6 extended functionality by adding new fields to several object types.
vRealize Operations for example can make use of the XML-API to automate operational tasks.
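The GET /hosts and GET /virtual_machines calls mentioned above can be scripted with any HTTP client. The sketch below uses Python's requests library; the base URL, token handling and response keys are placeholders/assumptions, so consult the HPE OmniStack REST API documentation for the exact authentication flow and payloads.

import requests

# Hypothetical values: replace with a real Virtual Controller address and a valid
# bearer token obtained as described in the HPE OmniStack REST API documentation.
BASE_URL = "https://omnistack-vc.example.com/api"
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}

# GET /hosts and GET /virtual_machines are the calls referenced above.
# verify=False only to tolerate self-signed certificates in a lab setting.
hosts = requests.get(f"{BASE_URL}/hosts", headers=HEADERS, verify=False).json()
vms = requests.get(f"{BASE_URL}/virtual_machines", headers=HEADERS, verify=False).json()

# Response keys assumed for illustration.
print(len(hosts.get("hosts", [])), "hosts;", len(vms.get("virtual_machines", [])), "VMs")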
|
REST-APIs
Apache Thrift
Python executables
Both Scale Computing end-users and ecosystem partners can programmatically manage the HC3 platform by using REST APIs, Apache Thrift, and/or Python executables.
|
|
|
OpenStack
VMware vRealize Automation (vRA)
OpenStack integration is achieved through VMware Integrated OpenStack v2.0
vRealize Automation 7 integration is achieved through vRA 7.0.
|
VMware vRealize Automation (vRA)
Cisco UCS Director
There is no OpenStack support for HPE SimpliVity 2600.
|
N/A
Scale Computing HC3 does not provide tight integration with either OpenStack or any automation/orchestration platforms.
|
|
|
N/A (not part of VxRail license bundle)
VMware vSAN does not provide any end-user self service capabilities of its own.
A self service portal enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license.
|
N/A
HPE SimpliVity 2600 does not provide any end-user self service capabilities of its own.
A self service portal enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license. HPE SimpliVity 2600 officially supports vRealize Automation (vRA) and vRealize Orchestration (vRO) through a reference architecture.
|
Partial
The Scale Computing HC3 GUI offers delegated administration to secondary users through Role-based Access Control (RBAC). The user access level can be changed to “Admin” with full administrator access or customized with a variety of role options that fall in between Read-only and Admin.
The following optional functional roles, which represent groupings of functional tasks, can be assigned:
- Backup - Clone, Export, Import, Add/Pause Replication to a VM, Create/Delete snapshots, and Create/Delete/Modify snapshot schedules.
- Cluster Settings - Create/Modify all settings within Control Center, except for User Management and Control (system/cluster shutdown).
- Cluster Shutdown - Shutdown the system/cluster and any running VMs.
- VM Create/Edit - Import VMs and Create, Modify, Clone, and Add/Modify VM block (virtual disk) and network devices.
- VM Delete - Delete VMs and their associated snapshots and devices.
- VM Power Controls - Start, Shutdown/Power Off, and Live Migrate VMs.
A user can be assigned any number of these optional roles.
|
|
|
|
Maintenance |
|
|
|
Partially Distributed
For a number of features and functions the vSAN software inside the VxRail appliances relies on other components that need to be installed and upgraded next to the core vSphere platform. Examples are Avamar Virtual Edition (AVE), vSphere Replication (VR) and RecoverPoint for VMs (RPVM). As a result some dependencies exist with other software.
|
Unified
A few minor components aside (eg. plugin, SRA), all storage-related features and functionality are built into the HPE SimpliVity 2600 platform. This type of consolidation means that only one product needs to be installed and upgraded and minimal dependencies exist with other software.
|
Unified
All storage-related features and functionality are built into the Scale Computing HC3 platform. This consolidation means that only one product needs to be installed and upgraded and minimal dependencies exist with other software.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
The Dell EMC VxRail Manager software provides one-click, non-disruptive patches and upgrades for the entire solution stack.
End-user organizations no longer need professional services for stretched cluster upgrades when upgrading from VxRail 4.7.300 to the next release.
vSAN 7.0 native File Services upgrades are also performed on a rolling basis. The file shares remain accessible during the upgrade because file server containers running on virtual machines that are being upgraded fail over to other virtual machines. During the upgrade some interruptions might be experienced while accessing the file shares.
|
Rolling Upgrade (1-by-1)
OmniStack upgrades are initiated against the Federation and are completed in an automated, rolling fashion. OmniStack Accelerator Card upgrades are included in the OmniStack upgrade process. Reboots of a host are only required if there is an associated OmniStack Accelerator Card firmware update. The updating of the firmware only happens occasionally.
HPE SimpliVity 380 provides a Fast Upgrade Manager that manages the entire upgrade process, from detection to execution.
HPE SimpliVity 380 OmniStack 3.7.7 provides Upgrade Manager support for simultaneously upgrading hosts in a cluster when the hosts do not have powered-on guest VMs, accelerating the upgrade process.
HPE SimpliVity 380 OmniStack 3.7.9 provided Upgrade Manager support for upgrading HPE OmniStack software at the cluster level. This feature only works when upgrading from HPE OmniStack 3.7.8 to HPE OmniStack 3.7.9.
|
Rolling Upgrade (1-by-1)
Scale Computing provides one-click software and firmware upgrades of HC3 nodes that typically take minutes to complete, while all VMs remain online during the entire upgrade procedure.
|
|
FW Upgrade Execution
Details
|
1-Click
The Dell EMC VxRail Manager software provides one-click, non-disruptive patches and upgrades for the entire solution stack.
VxRail 7.0 does not support vSphere Lifecycle Manager (vLCM); vLCM is disabled in VMware vCenter.
|
Rolling Upgrade (1-by-1)
In HPE SimpliVity OmniStack 4.0.0 the Upgrade Manager allows upgrading host firmware (SVT Service Pack for Proliant) on HPE SimpliVity 2600 Gen10.
Previously, server firmware and drivers had to be updated by rebooting the server from an ISO that automatically installed the Service Pack for Proliant (SPP). Because this is an offline procedure, it was required to migrate VMs and put the node in maintenance mode first.
Upgrade Manager now allows you to generate firmware upgrade reports for individual hosts or clusters. These reports show version (previous and current) and state information for the individual firmware components.
|
1-Click
Scale Computing provides one-click software and firmware upgrades of HC3 nodes that typically take minutes to complete, while all VMs remain online during the entire upgrade procedure.
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
Yes
Dell EMC VxRail provides unified support for the entire native solution. This means Dell EMC is the single point-of-contact for both software and hardware issues. Prerequisite is that the separate VMware vSphere licenses have not been acquired through an OEM vendor.
|
Yes
HPE provides unified support for the entire native solution. This means HPE is the single point-of-contact for any storage software (HPE SimpliVity 2600) and server hardware (HPE Proliant) related issues.
|
Yes
|
|
Call-Home Function
Details
|
Full
VxRail uses EMC Secure Remote Services (ESRS). ESRS maintains connectivity with the VxRail hardware components around the clock and automatically notifies EMC Support if a problem or potential problem occurs.
VxRail is also supported by Dell EMC Vision. Dell EMC Vision offers Multi-System Management, Health monitoring and Automated log collection for a multitude of products from the Dell EMC family.
|
Full
HPE SimpliVity 2600's call-home function is called 'OmniWatch' and is fully integrated into the platform. Enabling the feature is simple and straightforward and requires very few clicks.
OmniWatch is a proactive and preventative service that continuously monitors and evaluates the health of a customer’s hyperconverged infrastructure. OmniWatch is able to identify problems before they impact the business by filing a support case and alerting a HPE SimpliVity 380 support expert of potential issues or risks.
OmniWatch is a basic support service and is included with every support plan.
HPE SimpliVity 3.7.10 introduces the following functionality for Hyper-V environments:
- Ability to disable or enable HPE OmniWatch data collection after deployment.
- Ability to configure a proxy server for one or more clusters.
HPE SimpliVity OmniStack 4.0.0 introduces support for HPE InfoSight cloud service. HPE InfoSight automatically monitors the health of each HPE SimpliVity host in the federation. Once a day, HPE InfoSight sends a report that includes information on the system status and significant events. It also contains details on:
- Cluster, host, virtual machine, and Virtual Controller names
- Host serial numbers
- Host IP addresses
- Virtual machine sizes
- Datastore details (name, physical capacity, free space, memory size)
The report does not contain any user identifying information such as user names or virtual machine IP addresses.
|
Full
When the Scale Computing HC3 state machines detect failure modes or significant issues, they notify the Scale Computing support group (by default settings) via SNMP. Also, the state machines themselves automatically remediate the issue if possible.
|
|
Predictive Analytics
Details
|
Full (not part of VxRail license bundle)
vRealize Operations (vROps) provides native vSAN support with multiple dashboards for proactive alerting, heat maps, device and cluster insights, and streamlined issue resolution. vROps also provides forward trending and forecasting for the vSAN datastore as well as any other datastore (SAN/NAS).
vSAN 6.7 introduced 'vRealize Operations (vROps) within vCenter'. This provides end users with 6 dashboards inside the vCenter console, giving insights but not actions. Three of these dashboards relate to vSAN: Overview, Cluster View, Alerts. One of the widgets inside these dashboards displays 'Time remaining before capacity runs out'. Because this provides only some very basic trending information, a full version of the vROps product is still required.
'vRealize Operations (vROps) within vCenter' is included with vSAN Enterprise and vSAN Advanced.
The full version of vRealize Operations (vROps) is licensed as a separate product from VMware vSAN.
In June 2019 an early access edition (=technical preview) of VxRail Analytical Consulting Engine (ACE) was released to the public for test and evaluation purposes. VxRail ACE is a cloud service that runs in a Dell EMC secure data lake and provides infrastructure machine learning for insights that can be viewed by end-user organizations on a Dell EMC managed web portal. On-premises VxRail clusters send advanced telemetry data to VxRail ACE by leveraging the existing SRS secure transport mechanism in order to provide the cloud platform with raw data input. VxRail ACE is built on Pivotal Cloud Foundry.
VxRail ACE provides:
- global visualization across all VxRail clusters and vCenter appliances;
- simplified health scores at the cluster and node levels;
- advanced capacity and performance metrics charting so problem areas (CPU, memory, disk, networking) can be pinpointed up to the VM level;
- future capacity planning by analyzing the previous 180 days of storage use data, and projecting data usage for the next 90 days.
VxRail ACE supports VxRail 4.5.215 or later, as well as 4.7.0001 or later. Sending advanced telemetry data to VxRail ACE is optional and can be turned off. The default collection frequency is once every hour.
VxRail ACE is designed for extensibility so that future visibility between VxRail ACE and vRealize Operations Manager is possible.
Because VxRail ACE is not Generally Available (GA) at this time, it is not yet considered in the WhatMatrix evaluation/scoring mechanism.
|
Full
HPE SimpliVity 2600's predictive analytics function is called 'OmniView'. OmniView is Software as a Service (SaaS) that runs in the HPE SimpliVity 2600 Support cloud and is accessible through a native web interface.
OmniView provides advanced insight into running HPE SimpliVity 2600 deployments through customizable dashboards and reports, allowing for easy visualization. OmniView keeps track of historical data about the deployment and uses predictive analytics to spot trends and make forecasts that can be used for resource planning. Resources include Host/VM CPU, Host/VM Memory, Host/VM Storage Performance (IOPS/Latency) and Host/VM Storage Capacity (Primary and Backup TBs).
OmniView offers the ability to drill down into multiple vCenter and HPE SimpliVity 2600 data points in great detail, enabling users to diagnose and troubleshoot any issues that may come up.
Currently OmniView is only included with mission critical support.
|
N/A
Scale Computing HC3 does not natively have predictive analytics capabilities.
|
|