Addon
|
|
|
|
|
|
|
custom |
|
|
Unique Feature 1
|
Add-On not supported by this product
|
Add-On not supported by this product
|
Add-On not supported by this product
|
|
|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
|
- + mature and feature-rich offering
- + great ecosystem & support
- + skills prevalent
|
|
|
Cons
|
|
- - needs clear strategy for public cloud future
- - comprehensive capability but can become expensive
- - NSX is a leading SDN solution, but skills are scarce
|
|
|
|
|
Content |
|
|
|
Enzo Raso (Citrix) and The VIRTUALIST
Content created by Enzo Raso (Citrix) and THE VIRTUALIST: http://www.thevirtualist.org/
|
THE VIRTUALIST
Content created by THE VIRTUALIST
|
THE VIRTUALIST
Content created by THE VIRTUALIST
|
|
|
|
Assessment |
|
|
|
XenServer 7 Standard Edition includes Citrix support. XenServer 7.0 Standard Edition includes the following new features:
- Health Check
- Open FCoE support
- Software-boot-from-iSCSI for Cisco UCS
- Support for Windows Server Containers
- Greatly increased configuration limits for host and guest memory as well as physical and virtual CPUs
- Static guest IP setting
For Standard vs Enterprise edition, please check: http://bit.ly/2ddQiVL
Citrix released XenServer 7.0 in May 2016; this major release added support for:
- Intel GPU virtualisation (“GVT-g”) which provides virtualised graphics without any additional hardware
- Direct Inspect APIs, enabling third party security vendors to protect apps and desktops, transparently scanning VM memory via the hypervisor with no agent within the VMs
- Health Check to automate the upload of logs to Citrix Insight Services, with the user receiving diagnostic results in the XenCenter management console
- Automated installation and updating of Windows IO drivers via the Windows Update mechanism
- New storage options including SMB (CIFS), open-FCoE, NFSv4, Software Boot from iSCSI on Cisco UCS
- Running Windows Docker containers on a Windows Server 2016 VM running on XenServer
- Numerous maintenance improvements, including increased size of Dom0 disk partitions, with log files kept on a larger, separate partition
As a major release, XenServer 7.0 will be supported by Citrix for five years.
Citrix entered the Hypervisor market with the acquisition of XenSource - the main supporter of the open source Xen project - in Oct 2007. The Xen project continues to exist, see https://www.xenproject.org/
|
vSphere 6.5 Standard - Overview
vSphere is the collective term for VMware's virtualization platform; it includes the ESXi hypervisor as well as the vCenter management suite and associated components. vSphere is considered by many the industry's most mature and feature-rich virtualization platform, with origins in the initial ESX releases around 2001/2002.
|
vSphere 6.0 - Overview
vSphere is the collective term for VMware's virtualization platform; it includes the ESXi hypervisor as well as the vCenter management suite and associated components. vSphere is considered by many the industry's most mature and feature-rich virtualization platform, with origins in the initial ESX releases around 2001/2002.
vSphere is available in various editions and bundles:
vSphere Editions (un-bundled)
- Hypervisor (free)
- Standard
- Enterprise
- Enterprise Plus
vSphere with Operations Management (OM) - (vSphere + VMware vCenter Operations Management Suite Standard)
- vSphere with Operations Management Standard Edition
- vSphere with Operations Management Enterprise Edition
- vSphere with Operations Management Enterprise Plus Edition
vSphere with Operations Management Acceleration Kits (AK) - (Convenience bundles that include: six processor licenses for vSphere with Operations Management + vCenter Server Standard license. Six processor licenses of vSphere Data Protection Advanced with the Enterprise and Enterprise Plus Acceleration Kits only)
vSphere with Operations Management Acceleration Kits decompose into their individual kit components after purchase. This allows customers to upgrade and renew SnS for each individual component.
- vSphere with Operations Management Acceleration Kit Standard
- vSphere with Operations Management Acceleration Kit Enterprise
- vSphere with Operations Management Acceleration Kit Enterprise Plus
vSphere Essentials Kits - all-in-one solutions for small environments that include the vSphere processor licenses and vCenter Server for Essentials (for an environment of up to three hosts with up to 2 CPUs each). Scalability limits for the Essentials Kits are product-enforced and cannot be extended other than by upgrading the whole kit to an Acceleration Kit. vSphere Essentials and Essentials Plus Kits are self-contained solutions and may not be decoupled or combined with other vSphere editions.
- Essentials
- Essentials Plus
vSphere Remote Office Branch Office Editions - licensing is priced in packs of 25 VMs (Virtual Machines). Editions can be used in conjunction with an existing or separately purchased vCenter Server edition. SnS is required for at least one year. The 25 VM pack can be distributed across multiple sites. A maximum of a single 25 VM pack can be used in a single remote location or branch office.
- VMware vSphere Remote Office Branch Office Standard
- VMware vSphere Remote Office Branch Office Advanced
VMware vSphere Desktop
vSphere Desktop Edition is a vSphere edition designed for licensing vSphere in VDI deployments. It can only be used for VDI deployments and can be leveraged with both VMware Horizon View and other third-party VDI connection brokers (e.g. Citrix XenDesktop).
VMware's cloud (vCloud Suite) and VDI (VMware Horizon) offerings are separate (fee-based) products.
|
|
|
XenServer 7 Release date May 2016
Xen's first public release was in 2003; it became part of Novell SUSE 10 in 2005 (and later also Red Hat). In Oct 2007 Citrix acquired XenSource (the main maintainer of the Xen code) and released XenServer under the Citrix brand.
Version 5.6 was released in May 2010, 5.6SP2 in May 2011, 6.0 in Sep 2011, 6.1 in Sep 2012, 6.2 in June 2013 and 6.5 in January 2015.
|
Release Dates:
vSphere 6.5 : November 15th 2016
vSphere 6.5 is VMware's sixth generation of bare-metal enterprise virtualization software, evolving from ESX 1.x (2001/02) and 2.x (2003) to Virtual Infrastructure 3 (2006) and, in May 2009, to vSphere 4.x. The small-footprint ESXi architecture became available in Dec 2007. vSphere 5 was announced in July 2011 (GA August 2011) and was the first vSphere release converged on ESXi only; vSphere 5.1 was released on 10th Sep 2012, vSphere 5.5 on Sep 22nd 2013, vSphere 6.0 on Feb 2nd 2015, and vSphere 6.5 on November 15th 2016.
|
Release Dates:
vSphere 6.0 : Feb 2nd 2015 (ESX: 2001/2002, ESXi: Dec 2007) - more details below
vSphere 6.0 is VMware's sixth generation of bare-metal enterprise virtualization software, evolving from ESX 1.x (2001/02) and 2.x (2003) to Virtual Infrastructure 3 (2006) and, in May 2009, to vSphere 4.x. The small-footprint ESXi architecture became available in Dec 2007. vSphere 5 was announced in July 2011 (GA August 2011) and was the first vSphere release converged on ESXi only; vSphere 5.1 was released on 10th Sep 2012, vSphere 5.5 on Sep 22nd 2013, and vSphere 6.0 on Feb 2nd 2015.
|
|
|
|
Pricing |
|
|
|
Open Source (free) or two commercial editions; pricing for the Standard Edition is: Annual: $345/socket, Perpetual: $763/socket - both including 1 year of software maintenance.
For commercial editions (Standard or Enterprise) XenServer is licensed on a per-CPU socket basis. For a pool to be considered licensed, all XenServer hosts in the pool must be licensed. XenServer only counts populated CPU sockets.
Customers who have purchased XenApp or XenDesktop continue to have an entitlement to XenServer.
In XenServer 7, customers should allocate product licenses using a Citrix License Server, as with other Citrix components. From version 6.2.0 onwards, XenServer (other than via the XenDesktop licenses) is licensed on a per-socket basis. Allocation of licenses is managed centrally and enforced by a standalone Citrix License Server, physical or virtual, in the environment. After applying a per-socket license, XenServer will display as Citrix XenServer Per-Socket Edition.
|
Std: $995/socket + S&S:$273 (B) or $323 (Prod);
Std + Ops Mgmt: $1995 / socket + S&S:$419 (B) or $499 (Prod)
vSphere is licensed per physical CPU (socket, not core), without restrictions on the amount of physical cores or virtual RAM configured. There are also no license restrictions on the number of virtual machines that a (licensed) host can run.
S&S Basic or Production (1-year example) - Production (P): 24 hours/day, 7 days/week, 365 days/year; Basic (B): 12 hours/day, Monday-Friday. Subscription and Support is mandatory. Other packages (Acceleration Kits, Essentials Kits) are available. Details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf and here http://www.vmware.com/products/vsphere/pricing
|
Ent+ :
$3,495/socket + S&S 1Y: $734 (B) or $874 (Prod)
vSphere is licensed per physical CPU (socket, not core), without restrictions on the amount of physical cores or virtual RAM configured. There are also no license restrictions on the number of virtual machines that a (licensed) host can run.
S&S Basic or Production (1-year example) - Production (P): 24 hours/day, 7 days/week, 365 days/year; Basic (B): 12 hours/day, Monday-Friday. Subscription and Support is mandatory. Other packages (Acceleration Kits, Essentials Kits) are available. Details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf and here http://www.vmware.com/products/vsphere/pricing
|
|
|
Free (XenCenter)
Citrix XenCenter is the Windows-native graphical user interface for managing Citrix XenServer. It is included for no additional cost (open source as well as commercial versions).
|
$5,995(Std) + $1,259(B) or $1,499 (P)
vCenter Server
Centralized visibility, proactive management and extensibility for VMware vSphere from a single console
VMware vCenter Server provides a centralized platform for managing your VMware vSphere environments, so you can automate and deliver a virtual infrastructure with confidence.
http://www.vmware.com/products/vcenter-server.html
|
$4995(Std) + $1049(B) or $1249 (P)
License price is per vCenter Server instance: (S) Standard (for unlimited hosts) or (Fnd) vCenter Server Foundation (limited to 3 vSphere hosts), plus S&S - Subscription and Support for 1 year; (P) Production: 24 hours/day, 7 days/week, 365 days/year; (B) Basic: 12 hours/day, Monday-Friday.
The vSphere client (for single server management) is free.
Licensing details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf
Pricing here: http://www.vmware.com/products/datacenter-virtualization/vsphere/pricing.html
|
|
Bundle/Kit Pricing
Details
|
No
No bundles or kits are documented.
|
yes
Kits:
- VMware vSphere Remote Office Branch Office Editions
- VMware vSphere Essentials Kits
- VMware vSphere and vSphere with Operations Management Acceleration Kits
|
See vSphere AK with OM
See vSphere with OM
|
|
Guest OS Licensing
Details
|
No
A demo Linux VM is included, but there are no guest OS licenses included with the XenServer license. Guest OSes therefore need to be licensed separately.
|
No
|
No
|
|
|
VM Mobility and HA
|
|
|
|
|
|
|
VM Mobility |
|
|
Live Migration of VMs
Details
|
Yes XenMotion (1)
XenMotion enables live migration of virtual machines across XenServer hosts without perceived downtime. XenServer supports only one migration at a time (i.e. sequential execution if multiple migrations are started).
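For illustration only (not part of the original matrix): a minimal XenAPI (Python) sketch of triggering XenMotion from a script; the pool master URL, credentials and VM/host names are placeholder assumptions.
    import XenAPI  # XenServer SDK Python bindings

    # Placeholder connection details (assumptions, not from the matrix)
    session = XenAPI.Session("https://xenserver-pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("web01")[0]
        target = session.xenapi.host.get_by_name_label("xs-host-02")[0]
        # Live-migrate the running VM to another host in the same pool
        session.xenapi.VM.pool_migrate(vm, target, {"live": "true"})
    finally:
        session.xenapi.session.logout()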
|
Yes vMotion
- Cross vSwitch vMotion Only
vMotion
- Cross vSwitch vMotion (all versions)
- Cross vCenter vMotion (n/a for Standard)
- Long Distance vMotion (n/a for Standard)
- Cross Cloud vMotion (n/a for Standard)
- Encrypted vMotion (n/a for Standard)
|
Yes vMotion
vMotion Enhancements in vSphere 6.0
- Cross vSwitch vMotion (Enterprise Plus, Enterprise, Standard)
- Cross vCenter vMotion (Enterprise Plus only)
- Long Distance vMotion (Enterprise Plus only)
- vMotion network improvements
- No requirement for L2 adjacency any longer!
- vMotion support for Microsoft Clusters using physical RDMs
vMotion supports 4 concurrent migrations per host over 1 Gbit networks (or 8 with 10 Gbit Ethernet vMotion connectivity).
vSphere 5.1 enabled users to combine vMotion and Storage vMotion into one operation. The combined migration copies both the virtual machine memory and its disk over the network to the destination host. In smaller environments, the ability to simultaneously migrate both memory and storage enables virtual machines to be migrated between hosts that do not have shared storage. In larger environments, this capability enables virtual machines to be migrated between clusters that do not have a common set of datastores. VMware does not use a specific term for this (shared-nothing vMotion) capability as it is now considered a standard vMotion capability (no specific license needed).
vSphere 5 enhanced vMotion performance through the ability to load-balance vMotion over multiple adapters. vSphere 5 also introduced a latency-aware Metro vMotion feature that provides better performance over long-latency networks and increases the round-trip latency limit for vMotion networks from 5 milliseconds to 10 milliseconds. With the Metro vMotion feature, the socket buffers are adjusted based on the observed round-trip time (RTT) over the vMotion network, allowing maximum throughput to be sustained at the higher latency.
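For illustration only (not part of the original matrix): a hedged pyVmomi (Python) sketch of requesting a compute-only vMotion through vCenter; the vCenter address, credentials and object names are placeholder assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details (assumptions, not from the matrix)
    ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.HostSystem], True)
    vm = next(o for o in view.view
              if isinstance(o, vim.VirtualMachine) and o.name == "web01")
    host = next(o for o in view.view
                if isinstance(o, vim.HostSystem) and o.name == "esxi-02.example.com")
    # Ask vCenter to vMotion the powered-on VM to another host (compute-only move)
    task = vm.MigrateVM_Task(host=host,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)
    Disconnect(si)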
|
|
Migration Compatibility
Details
|
Yes (Heterogeneous Pools)
XenServer 5.6 introduced Heterogeneous Pools which enables live migration across different CPU types of the same vendor (requires AMD Extended Migration or Intel Flex Migration), details here: http://bit.ly/1ADu7Py.
This capability is maintained in XenServer 7.x
|
Yes (EVC)
Enhanced vMotion Compatibility - enabled on vCenter cluster-level, utilizes Intel FlexMigration or AMD-V Extended Migration functionality available with most newer CPUs (but cannot migrate between Intel and AMD), Details here: http://kb.vmware.com/kb/1005764
|
Yes (EVC)
Enhanced vMotion Compatibility - enabled on vCenter cluster-level, utilizes Intel FlexMigration or AMD-V Extended Migration functionality available with most newer CPUs (but cannot migrate between Intel and AMD), Details here: http://kb.vmware.com/kb/1005764
|
|
|
Yes
Enabled through XenCenter
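For illustration only (not part of the original matrix): a minimal XenAPI (Python) sketch approximating what XenCenter's "Enter Maintenance Mode" does; host names and credentials are placeholder assumptions.
    import XenAPI

    # Placeholder connection details (assumptions, not from the matrix)
    session = XenAPI.Session("https://xenserver-pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        host = session.xenapi.host.get_by_name_label("xs-host-01")[0]
        # Stop accepting new VMs on the host, then live-migrate the running VMs away
        session.xenapi.host.disable(host)
        session.xenapi.host.evacuate(host)
    finally:
        session.xenapi.session.logout()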
|
Yes
NEW
Maintenance mode is a core feature to prepare a host to be shut down safely.
vSphere 6.5: Faster Maintenance Mode and Evacuation Updates.
|
Yes
Maintenance mode (the ability to put a host into maintenance mode, which automatically live-migrates all virtual machines onto other available hosts so that the host can be shut down safely) is a core feature enabled through vCenter Server management.
|
|
Automated Live Migration
Details
|
No Workload Balancing
Workload Balancing (WLB), which was reintroduced in 6.5, has been enhanced in XenServer 7.0. It is available in the Enterprise edition.
|
No
vSphere 6.5
VM Distribution: Enforce an even distribution of VMs.
Memory Metric for Load Balancing: DRS uses Active memory + 25% as its primary metric.
CPU over-commitment: an option to enforce a maximum vCPU:pCPU ratio in the cluster.
Network-Aware DRS: the system looks at network bandwidth on the host when making migration recommendations.
Storage IO Control configuration is now performed using Storage Policies and IO limits enforced using vSphere APIs for IO Filtering (VAIO).
Using the Storage Policy Based Management (SPBM) framework, administrators can define different policies with different IO limits and then assign VMs to those policies.
|
Yes (DRS) - CPU, Mem, Storage IO
Distributed Resource Scheduler - handles initial VM-to-host placement and initiates vMotion based on host CPU and host memory constraints.
Enterprise: Storage DRS - n/a
Standard: DRS - n/a, Storage DRS - n/a
In vSphere, vSphere DRS can configure DRS affinity rules, which help maintain the placement of virtual machines on hosts within a cluster. Various rules can be configured. One such rule, a virtual machine-virtual machine affinity rule, specifies whether selected virtual machines should be kept together on the same host or kept on separate hosts. A rule that keeps selected virtual machines on separate hosts is called a virtual machine-virtual machine anti-affinity rule and is typically used to manage the placement of virtual machines for availability purposes.
Also new with 5.5 is the ability to live storage-migrate VMs protected by vSphere Replication (or automate it with Storage DRS).
Configuration is simple (tick box) and can be set to manual (recommendations only), partially automated (automatic placement) or fully automated.
With vSphere 5.1 Storage DRS is interoperable with vCloud Director.
vSphere 5 introduced Storage DRS - the ability to logically pool storage resources (datastore cluster) and migrate the actual virtual machine data to other disk resources based on performance criteria (I/O and latency). Storage DRS makes initial VMDK placement and gives migration recommendations to avoid I/O and space-utilization bottlenecks on the datastores in the cluster. The migration is performed using (live) Storage vMotion.
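For illustration only (not part of the original matrix): a hedged pyVmomi (Python) sketch of enabling fully automated DRS on a cluster; the vCenter address, credentials and cluster name are placeholder assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details (assumptions, not from the matrix)
    ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    # Turn on DRS in fully automated mode for the cluster
    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))
    task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    Disconnect(si)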
|
|
|
No Workload Balancing
Power management is part of XenServer Workload Balancing (WLB) available in Enterprise edition.
Background: Power Management, introduced with 5.6, was able to automatically consolidate workloads onto the lowest possible number of physical servers and power off unused hosts when their capacity was not needed. Hosts would be automatically powered back on when needed.
|
No
Distributed Power Management - enables consolidation of virtual machines onto fewer hosts and powering down of unused capacity, reducing power and cooling costs. This can be fully automated: servers are powered off when not needed and powered back on when workload increases.
|
Yes (DPM), Enhanced Host Power Management (C-States)
Distributed Power Management - enables consolidation of virtual machines onto fewer hosts and powering down of unused capacity, reducing power and cooling costs. This can be fully automated: servers are powered off when not needed and powered back on when workload increases.
Enterprise Plus and Enterprise only; Standard: DPM - n/a
Host Power Management (HPM): ESXi 5.5 provides additional power savings by leveraging deep processor power states (C-states). In vSphere 5.1 and earlier, the balanced policy for host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage. In vSphere 5.5, the deep processor power state (C-state) is also used, providing additional power savings.
Please note that HPM and DPM are independent of each other: DPM is controlled by vCenter and HPM by the ESXi hypervisor.
|
|
Storage Migration
Details
|
Yes (Storage XenMotion)
NEW
Storage XenMotion in XenServer 7.0 now works with the VM in any power state (stopped, paused or running)
XenServer 6.1 introduced the long awaited (live) Storage XenMotion capability
Storage XenMotion allows storage allocation changes while VMs are running, or VMs to be moved from one host to another, including scenarios where (a) the VMs are NOT located on storage shared between the hosts (shared-nothing live migration) and (b) the hosts are not even in the same resource pool. This enables system administrators to:
- Live-migrate a VM disk across shared storage targets within a resource pool (e.g. move between LUNs when one is at capacity)
- Live-migrate a VM disk from one storage type to another within a resource pool (e.g. perform storage array upgrades)
- Live-migrate a VM disk to or from local storage on a XenServer host within a resource pool (reduce deployment costs by using local storage)
- Rebalance or move VMs between XenServer pools (for example, moving a VM from a development environment to a production environment)
Starting with XenServer 6.1, administrators initiating XenMotion and Storage XenMotion operations can specify which management interface the transfers should occur over. Through the use of multiple management interfaces, the virtual disk transfer can occur with less impact on both core XenServer operations and VM network utilization.
Citrix supports up to 3 concurrent Storage XenMotion operations. The maximum number of (non-CDROM) VDIs per VM is six. The number of snapshots allowed per VM undergoing Storage XenMotion is 1.
Technical details here: http://bit.ly/2cCtIJ3
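For illustration only (not part of the original matrix): a hedged XenAPI (Python) sketch of a live VDI move (Storage XenMotion) within a pool; the host URL, SR and VM names are placeholder assumptions.
    import XenAPI

    # Placeholder connection details (assumptions, not from the matrix)
    session = XenAPI.Session("https://xenserver-pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("web01")[0]
        target_sr = session.xenapi.SR.get_by_name_label("NFS-SR-02")[0]
        # Move each of the running VM's disks to another storage repository
        # in the same pool (live storage migration of the VDIs)
        for vbd in session.xenapi.VM.get_VBDs(vm):
            if session.xenapi.VBD.get_type(vbd) == "Disk":
                vdi = session.xenapi.VBD.get_VDI(vbd)
                session.xenapi.VDI.pool_migrate(vdi, target_sr, {})
    finally:
        session.xenapi.session.logout()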
|
Yes (Live Storage vMotion)
Storage vMotion allows live migration of virtual machine disk files (e.g. across heterogeneous storage arrays) without VM downtime. Storage DRS handles initial VMDK placement and gives migration recommendations to avoid I/O and space-utilization bottlenecks on the datastores in the cluster. The migration is performed using Storage vMotion.
|
Yes (Live Storage vMotion); including replicated VMs
Storage vMotion allows live migration of virtual machine disk files (e.g. across heterogeneous storage arrays) without VM downtime. Storage DRS handles initial VMDK placement and gives migration recommendations to avoid I/O and space-utilization bottlenecks on the datastores in the cluster. The migration is performed using Storage vMotion.
With vSphere 5.5 the number of concurrent Storage vMotion operations per datastore is 8 and the number of concurrent Storage vMotion operations per host is 2.
New in vSphere 5.5 is the ability to migrate VMs replicated with vSphere Replication (using Storage vMotion or Storage DRS) - note that Storage vMotion is only supported for the replicated VM, not the target replica.
Since vSphere 5.1 Storage DRS is compatible with vCloud Director.
vSphere 5 introduced support to migrate virtual machines that have snapshots/linked clones and improved the migration performance using i/o mirroring. vSphere 5 also introduced Storage DRS (enhanced, automated storage VMotion) - the ability to logically pool storage resources (datastore cluster) and migrate the actual virtual machine data to other disk resources based on performance policies.
|
|
|
|
HA/DR |
|
|
|
16 hosts / resource pool
16 Hosts per Resource Pool.
Please see XenServer 7.0 Configuration Limits document http://bit.ly/2dtaur1
|
max 64 nodes / 8000 VMs per cluster
Up to 64 nodes can be in a DRS/HA cluster, with a maximum of 8000 VMs per cluster.
|
max 64 nodes / 8000 VMs per cluster
NEW
Up to 64 nodes can be in a DRS/HA cluster, with a maximum of 8000 VMs per cluster.
|
|
Integrated HA (Restart vm)
Details
|
Yes
XenServer High Availability protects the resource pool against host failures by restarting virtual machines. HA allows for configuration of restart priority and failover capacity. Configuration of HA is simple (effort similar to enabling VMware HA).
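For illustration only (not part of the original matrix): a minimal XenAPI (Python) sketch of enabling pool HA and marking a VM for automatic restart; the heartbeat SR and VM names are placeholder assumptions.
    import XenAPI

    # Placeholder connection details (assumptions, not from the matrix)
    session = XenAPI.Session("https://xenserver-pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        pool = session.xenapi.pool.get_all()[0]
        heartbeat_sr = session.xenapi.SR.get_by_name_label("NFS-SR-01")[0]
        # Enable HA on the pool, using a shared SR for the heartbeat/statefile
        session.xenapi.pool.enable_ha([heartbeat_sr], {})
        # Ask HA to restart this VM automatically after a host failure
        vm = session.xenapi.VM.get_by_name_label("web01")[0]
        session.xenapi.VM.set_ha_restart_priority(vm, "restart")
    finally:
        session.xenapi.session.logout()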
|
Yes (VMware HA)
NEW
vSphere 6.5 Proactive HA detects degraded hardware conditions and evacuates the host before a failure occurs (requires a plug-in provided by OEM vendors).
Quarantine mode - a host is placed in quarantine mode if it is considered to be in a degraded state.
Simplified vSphere HA Admission Control - 'Percentage of Cluster Resources' admission control policy.
vSphere 6.0
- Support for Virtual Volumes – With Virtual Volumes a new type of storage entity is introduced in vSphere 6.0.
- VM Component Protection – This allows HA to respond to a scenario where the connection to the virtual machine’s datastore is impacted temporarily or permanently.
“Response for Datastore with All Paths Down”
“Response for Datastore with Permanent Device Loss”
- Increased scale – Cluster limit has grown from 32 to 64 hosts and to a max of 8000 VMs per cluster
- Registration of “HA Disabled” VMs on hosts after failure.
VMware HA restarts virtual machines according to defined restart priorities and monitors capacity needs required for defined level of failover.
vSphere HA in vSphere 5.5 has been enhanced to conform with virtual machine-virtual machine anti-affinity rules. Application availability is maintained by controlling the placement of virtual machines recovered by vSphere HA without migration. This capability is configured as an advanced option in vSphere 5.5.
vSphere 5.5 also improved the support for virtual Microsoft Failover Clustering (cluster nodes in virtual machines) - note that this functionality is independent of VMware HA and requires the appropriate Microsoft OS license and configuration of a Microsoft Failover Cluster. Microsoft clusters running on vSphere 5.5 now support Microsoft Windows Server 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
While not obvious to the user - with vSphere 5, HA has been re-written from ground-up, greatly reducing configuration and failover time. It now uses a one master - all other slaves concept. HA now also uses storage path monitoring to determine host health and state (e.g. useful for stretched cluster configurations).
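For illustration only (not part of the original matrix): a hedged pyVmomi (Python) sketch of switching on vSphere HA for a cluster; the vCenter address, credentials and cluster name are placeholder assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details (assumptions, not from the matrix)
    ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    # Enable vSphere HA (das) with admission control on the cluster
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True,
                                            admissionControlEnabled=True))
    task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    Disconnect(si)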
|
Yes (VMware HA)
NEW
vSphere 6.0
- Support for Virtual Volumes – With Virtual Volumes a new type of storage entity is introduced in vSphere 6.0.
- VM Component Protection – This allows HA to respond to a scenario where the connection to the virtual machine’s datastore is impacted temporarily or permanently.
“Response for Datastore with All Paths Down”
“Response for Datastore with Permanent Device Loss”
- Increased scale – Cluster limit has grown from 32 to 64 hosts and to a max of 8000 VMs per cluster
- Registration of “HA Disabled” VMs on hosts after failure.
VMware HA restarts virtual machines according to defined restart priorities and monitors capacity needs required for defined level of failover.
vSphere HA in vSphere 5.5 has been enhanced to conform with virtual machine-virtual machine anti-affinity rules. Application availability is maintained by controlling the placement of virtual machines recovered by vSphere HA without migration. This capability is configured as an advanced option in vSphere 5.5.
vSphere 5.5 also improved the support for virtual Microsoft Failover Clustering (cluster nodes in virtual machines) - note that this functionality is independent of VMware HA and requires the appropriate Microsoft OS license and configuration of a Microsoft Failover Cluster. Microsoft clusters running on vSphere 5.5 now support Microsoft Windows Server 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
While not obvious to the user - with vSphere 5, HA has been re-written from ground-up, greatly reducing configuration and failover time. It now uses a one master - all other slaves concept. HA now also uses storage path monitoring to determine host health and state (e.g. useful for stretched cluster configurations).
|
|
Automatic VM Reset
Details
|
No
There is no automated restart/reset of individual virtual machines e.g. to protect against OS failure
|
Yes (VMware HA)
NEW
vSphere 6.5 - Orchestrated Restart - VMware HA can now enforce VM-to-VM dependency chains, for a multi-tier application installed across multiple VMs.
Uses heartbeat monitoring to reset unresponsive virtual machines.
|
Yes (VMware HA)
Uses heartbeat monitoring to reset unresponsive virtual machines
|
|
VM Lockstep Protection
Details
|
No
While XenServer can perform VM restart in case of a host failure there is no integrated mechanism to provide zero downtime failover functionality.
|
Yes (Fault Tolerance) 2 vCPUs.
NEW
vSphere 6.5 FT has tighter integration with DRS and an enhanced network stack (lower network latency).
Fault Tolerance brings continuous availability protection for VMs with up to 4 vCPUs in Enterprise Plus; Standard is limited to 2 vCPUs.
Uses a shadow secondary virtual machine to run in lock-step with primary virtual machine to provide zero downtime protection in case of host failure.
|
Yes (Fault Tolerance) 4 vCPUs.
NEW
vSphere Fault Tolerance for Multi-Processor VMs (SMP-FT) - Fault Tolerance now brings continuous availability protection for VMs with up to 4 vCPUs in Enterprise Plus; Enterprise and Standard are limited to 2 vCPUs.
Uses a shadow secondary virtual machine to run in lock-step with primary virtual machine to provide zero downtime protection in case of host failure.
|
|
Application/Service HA
Details
|
No
There is no application monitoring/restart capability provided with XenServer
|
No
vSphere 6.5 Proactive HA detects degraded hardware conditions and evacuates the host before a failure occurs (requires a plug-in provided by OEM vendors).
Quarantine mode - a host is placed in quarantine mode if it is considered to be in a degraded state.
Simplified vSphere HA Admission Control - 'Percentage of Cluster Resources' admission control policy
VMware HA restarts virtual machines according to defined restart priorities and monitors capacity needs required for defined level of failover.
|
App HA
vSphere 6.0 Application High Availability (App HA) – Expanded support for more business critical applications.
vSphere 5.5 introduces enhanced application monitoring for vSphere HA with vSphere App HA.
App HA works in conjunction with vSphere HA host monitoring and virtual machine monitoring to further improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected. It is possible to protect several commonly used applications. vSphere HA can also reset the virtual machine if the application fails to restart.
App HA leverages VMware vFabric Hyperic to monitor applications (you need to deploy two virtual appliances per vCenter Server: vSphere App HA and vFabric Hyperic - the App HA virtual appliance stores and manages vSphere App HA policies while vFabric Hyperic monitors applications and enforces vSphere App HA policies, vFabric Hyperic agents are installed in the virtual machines containing applications that will be protected by App HA).
vSphere 5 introduced the High Availability (vSphere HA) application monitoring API.
|
|
Replication / Site Failover
Details
|
Integrated Disaster Recovery (no storage array control)
XenServer 6 introduced the new Integrated Site Recovery (maintained in 7.0), replacing the StorageLink Gateway Site Recovery used in previous versions. It utilizes the native remote data replication between physical storage arrays and automates the recovery and failback capabilities. The new approach removes the Windows VM requirement for the StorageLink Gateway components and now works with any iSCSI or hardware HBA storage repository (rather than only the restricted storage options with StorageLink support). You can perform failover, failback and test failover, and configure and operate everything using the Disaster Recovery option in XenCenter. Please note, however, that Site Recovery does NOT interact with the storage array, so you will have to e.g. manually break mirror relationships before failing over. You will need to ensure that the virtual disks as well as the pool metadata (containing all the configuration data required to recreate your VMs and vApps) are correctly replicated to your secondary site.
|
Yes (vSphere Replication)
DR Orchestration (Vendor Add-On: VMware SRM)
NEW
vSphere Replication is VMware’s proprietary hypervisor-based replication engine designed to protect running virtual machines from partial or complete site failures by replicating their VMDK disk files.
This version extends support for the 5 minute RPO setting to the following new data stores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL and VSAN 6.5.
|
Limited (native): vSphere Replication
Yes (with Vendor Add-On: SRM )
vSphere 6.0 vSphere Replication – improved scale and performance for vSphere Replication. Improved Recovery Point Objectives (RPOs) down to 5 minutes. Support for 2000 VM replications per vCenter.
vSphere replication (hypervisor-based replication product) is included in vSphere (without having to purchase the fee-based Site Recovery Manager) since vSphere 5.1 (maintained in 5.5)
VMware vSphere Replication will manage replication at the virtual machine (VM) level through VMware vCenter Server. It will also enable the use of heterogeneous storage across sites and reduce costs by provisioning lower-priced storage at failover location. Changed blocks in the virtual machine disk(s) for a running virtual machine at a primary site are sent via the network to a secondary site, where they are applied to the virtual machine disks for the offline (protection) copy of the virtual machine.
Site failover functionality can be achieved with vSphere Replication, but without Site Recovery Manager (SRM - a fee-based add-on) you will not achieve a fully automated site failover (without further scripting or other orchestration tools).
VMware introduced the built-in vSphere Replication function with vSphere 5 and SRM 5. Prior to vSphere 5.1 this feature was only enabled with Site Recovery Manager, a fee-based extension product - not the base vSphere product.
With 5.1 this feature became available with the standard vSphere editions (maintained with vSphere 5.5).
vSphere Replication 5.5 has the following new features:
- New user interface: You can access vSphere Replication from the Home screen of the vSphere Client.
- Multiple points in time recovery: configure the retention of replicas from multiple points in time (you can configure the number of retained instances and view details about the currently retained instances)
- Adding additional vSphere Replication servers: You can deploy multiple vSphere Replication servers to meet load balancing needs
- Interoperability with Storage vMotion and Storage DRS on the primary site
- vSphere 5.5 includes VMware vSAN as an experimental feature. You can use VMware Virtual SAN datastores as a target datastore when configuring replications, but it is not supported for use in a production environment.
- Configure vSphere Replication on virtual machines that reside on VMware vSphere Flash Read Cache storage. vSphere Flash Read Cache is disabled on virtual machines after recovery.
Please note that while Site Recovery Manager can be used to orchestrate vSphere replication i.e. automate the site failover - it is not included with this license, but can be purchased as fee based option - see Vendor Extensions section below.
|
|
|
Management
|
|
|
|
|
|
|
General |
|
|
Central Management
Details
|
Yes (XenCenter)
XenCenter is the central Windows-based multi-server management application (client) for XenServer hosts (including the open source version).
It differs from other management applications (e.g. SCVMM or vCenter), which typically have a management server/management client architecture where the management server holds centralized configuration information (typically in a 3rd-party DB). Unlike these management consoles, XenCenter distributes management data across XenServer hosts in a resource pool to ensure there is no single point of management failure. If a management server (host) should fail, any other server in the pool can take over the management role. XenCenter is essentially only the client.
License administration is done using a separate web interface.
|
Yes (vCenter Server Standard)
NEW
vCenter Server Standard
Centralized visibility, proactive management and extensibility for VMware vSphere from a single console
VMware vCenter Server provides a centralized platform for managing your VMware vSphere environments, so you can automate and deliver a virtual infrastructure with confidence.
Available as a Windows installation or as the appliance (VCSA), with an embedded or separate PSC.
- Simplified architecture (integrated vCenter Server Appliance, Update Manager included all-in-one), no Windows/SQL licensing
- Native High Availability (HA) of vCenter for the appliance is built-in – Automatic failover (Web Client may require re-login)
- Native Backup and Restore of vCenter appliance – Simplified backup and restore with a new native file-based solution. Restore the vCenter Server configuration to a fresh appliance and stream backups to external storage using HTTP, FTP, or SCP protocols.
- New HTML5-based vSphere Client that is both responsive and easy to use (based on VMware's new Clarity UI)
|
Yes (vCenter Server Standard + enhanced Web Client), enhanced vCSA, SSO
vCenter Server is the central management application for vSphere, it is available in the following editions:
- vCenter Server Essentials - Integrated management for vSphere Essentials Kits
- vCenter Server Foundation - Centralized management for up to three vSphere hosts
- vCenter Server Standard - Highly scalable management with rapid provisioning, monitoring, orchestration and control of all virtual machines in a vSphere environment
vCenter Standard also allows for multiple vCenter instances to be linked (linked mode for central management and alerting). vCenter Server Foundation and vCenter Server Essentials editions do not support Linked Mode.
Additional comments:
The initial vCenter Single Sign On (SSO) has been improved in 5.5:
- Simplified deployment (single installation model for customers of all sizes)
- Enhanced Microsoft Active Directory integration - The addition of native Active Directory support enables cross-domain authentication with one- and two-way trusts common in multi-domain environments.
- Architecture - Built from the ground up, this architecture removes the requirement of a database and now delivers a multi-master authentication solution with built-in replication and support for multiple tenants.
vSphere 5 introduced the capability to deploy vCenter Server as a Linux-based appliance (rather than installing the individual components manually). This capability has been maintained with vSphere 5.5 and the scalability limits have been improved (e.g. from 5 hosts and 50 VMs to 100 hosts and 3000 virtual machines in 5.5).
The appliance limits are listed separately in the matrix as vCSA.
vSphere 5 also introduced a web-based client that is targeted to be the primary method of management with version 5.1 (rather than the C# client app).
vSphere 5.5 further enhanced the feature set of the web client:
- full client support for Mac OS X
- Drag and drop (e.g. drag and drop vm on to hosts)
- new filters and dynamically updated object lists based on those filters
- new recent-items navigation aid
|
|
Virtual and Physical
Details
|
No
XenCenter focuses on managing the virtual infrastructure (XenServer hosts and the associated virtual machines).
|
Limited (plug-ins)
vCenter and associated components focus on management of virtual infrastructure - physical (non-virtualized) infrastructure will typically require separate management.
However, one can argue that there are increasingly aspects of physical management (bare-metal host deployment, vCenter Operations capabilities, vCenter monitoring of physical hosts, etc.), but the core focuses on the virtual aspects.
Additionally, VMware encourages 3rd party vendors to provide management plug-ins for the vCenter Client (classic or web) that can manage peripheral components of the environment (3rd party storage, servers etc.).
|
Limited (plug-ins)
vCenter and associated components focus on management of virtual infrastructure - physical (non-virtualized) infrastructure will typically require separate management.
However, one can argue that there are increasingly aspects of physical management (bare-metal host deployment, vCenter Operations capabilities, vCenter monitoring of physical hosts, etc.), but the core focuses on the virtual aspects.
Additionally, VMware encourages 3rd party vendors to provide management plug-ins for the vCenter Client (classic or web) that can manage peripheral components of the environment (3rd party storage, servers etc.).
Example for plugin guidelines here: http://bit.ly/17wF0RH
|
|
RBAC / AD-Integration
Details
|
Yes (hosts/XenCenter)
XenServer 5.6 introduced Role Based Access Control by allowing the mapping of a user (or a group of users) to defined roles (a named set of permissions), which in turn have the ability to perform certain operations. RBAC depends on Active Directory for authentication services. Specifically, XenServer keeps a list of authorized users based on Active Directory user and group accounts. As a result, you must join the pool to the domain and add Active Directory accounts before you can assign roles.
There are 6 default roles: Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator and Read Only - which can be listed and modified using the xe CLI.
Details here: http://bit.ly/2d8uA7C
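For illustration only (not part of the original matrix): a minimal XenAPI (Python) sketch that lists the RBAC roles defined on a pool; the host URL and credentials are placeholder assumptions.
    import XenAPI

    # Placeholder connection details (assumptions, not from the matrix)
    session = XenAPI.Session("https://xenserver-pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        # List the RBAC roles known to the pool (Pool Admin, Pool Operator, ...)
        for role in session.xenapi.role.get_all():
            print(session.xenapi.role.get_name_label(role), "-",
                  session.xenapi.role.get_name_description(role))
    finally:
        session.xenapi.session.logout()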
|
Yes (vCenter and ESXi hosts)
Platform Services Controller (PSC) deals with identity management for administrators and applications that interact with the vSphere platform.
|
Yes (vCenter and ESXi hosts), enhanced SSO
The vCenter Single Sign On (SSO) introduced with vSphere 5.1 has been improved in 5.5, reflecting (negative) feedback on the initial release:
- Simplified deployment (single installation model for customers of all sizes)
- Enhanced Microsoft Active Directory integration - The addition of native Active Directory support enables cross-domain authentication with one- and two-way trusts common in multi-domain environments.
- Architecture - Built from the ground up, this architecture removes the requirement of a database and now delivers a multi-master authentication solution with built-in replication and support for multiple tenants.
Granular access control using Active Directory integration for vCenter instances as well as for vSphere hosts has existed since vSphere 4.1, so users/admins can be centrally managed using the existing AD infrastructure and assigned roles and rights.
|
|
Cross-Vendor Mgmt
Details
|
No (native)
Yes (Vendor Add-On: CloudPlatform)
XenCenter only manages Citrix XenServer hosts.
Comments:
- Citrix's desktop virtualization product (XenDesktop, a fee-based add-on) supports multiple hypervisors (ESX, XenServer, Hyper-V)
- The Citrix XenServer Conversion Manager (introduced in 6.1) enables batch import of VMs created with VMware products into a XenServer pool, reducing the cost of converting to a XenServer environment.
|
vRealize Automation (Vendor Add-On)
VMware vRealize Automation automates the delivery of personalized infrastructure, applications and custom IT services.
This cloud automation software lets you deploy across a multi-vendor hybrid cloud infrastructure, giving you both flexibility and investment protection for current and future technology choices.
http://www.vmware.com/mena/products/vrealize-automation.html
|
Limited (Free Add-On): Multi-Hypervisor Manager 1.1 ;
Yes (with Vendor Add-On: vCloud Automation Center)
vCenter Multi-Hypervisor Manager is a vCenter component that allows (basic) management of heterogeneous (currently translates to: Microsoft Hyper-V) hypervisors in VMware vCenter Server.
- VMware vCenter Multi-Hypervisor Manager is available to all vCenter Standard Edition customers as a free download
- vCenter Multi-Hypervisor Manager 1.1 is compatible with vCenter Server 5.1.x and 5.5 and vSphere Client 5.1.x and 5.5
- It is NOT available in the vSphere Web Client
- Number of supported third-party hosts has been improved to 50 (from 20) in vCenter Multi-Hypervisor Manager 1.1
As with any multi-hypervisor management, the typical concerns remain: delayed support for new versions (e.g. Server 2012), only a subset of activities is supported, and it typically does not allow elimination of the native vendor tool.
Also, see this interesting article from Mike Laverick on real-life capability evaluation ... http://bit.ly/19CGJHM
Core Features (third party = MS Hyper-V):
- Third-party (MS Hyper-V) host management including add, remove, connect, disconnect, and view the host configuration.
- Ability to migrate virtual machines from third-party hosts to ESX or ESXi hosts.
- Ability to provision virtual machines on third-party hosts.
- Ability to edit virtual machine settings.
- Integrated vCenter Server authorization mechanism across ESX/ESXi and third-party hosts inventories for privileges, roles, and users.
- Automatic discovery of pre-existing third-party virtual machines
- Ability to perform power operations with hosts and virtual machines.
- Ability to connect and disconnect DVD, CD-ROM, and floppy drives and images to install operating systems.
With the addition of vCloud Automation Center, VMware has also multi-vendor cloud management capabilities. VMware vCloud Automation Center acts as a service governor, enabling policy-based provisioning across VMware-based private and public clouds, physical infrastructure, multiple hypervisors and Amazon Web Services.
VMware vCloud Automation Center is part of the vCloud Suite and is a fee-based add-on - see the cloud section for details.
|
|
Browser Based Mgmt
Details
|
No (Web Self Service: retired)
Web Self Service (retired with the launch of XenServer 6.2) was a lightweight portal which allowed individuals to operate their own virtual machines without having administrator credentials to the XenServer host. For large infrastructures, OpenStack is a full orchestration product with far greater capability; for a lightweight alternative, xvpsource.org offers a free open source product.
Related Citrix products have browser based access, for example Storefront (next generation of Web Interface)
|
Yes (vSphere Web Client, HTML5 Web Client)
vSphere Client - new version of the HTML5-based vSphere Client that will run alongside the vSphere Web Client. The vSphere Client is built right into vCenter Server 6.5 (both Windows and Appliance) and is enabled by default.
vSphere Web Client - improvements help with the overall user experience (home screen reorganized, performance improvements, live refresh for power states and tasks).
|
Yes - Enhanced Web Client (with enhanced SSO and BDE)
vSphere 6.0 vSphere Web Client Enhancements – Performance improvements to areas including login, home page loading, action menus, related objects and summary views. Streamlined component layout and optimized usability experience by flattening menus and navigation.
vSphere 5 introduced a web-based (Adobe Flex based) vSphere Client interface which makes access to vCenter platform-independent.
VMware has positioned the web client as the core management interface since vSphere 5.1 - largely matching and in many cases superseding functionality of the legacy client (many new features are only accessible through the web client). If you want to e.g. create one of the new large 62 TB VMDKs, you can only do that from the web client.
vSphere 5.5 maintains both the legacy and web client (confirming continued use of the legacy client) and has further enhanced the capabilities of the web client itself, as well as the Single Sign On that enables central authentication.
vSphere 5.5 web client enhancements:
- full client support for Mac OS X
- Drag and drop (e.g. drag and drop vm on to hosts)
- new filters and dynamically updated object lists based on those filters
- new recent-items navigation aid
The initial vCenter Single Sign On (SSO) has been improved in 5.5:
- Simplified deployment (single installation model for customers of all sizes)
- Enhanced Microsoft Active Directory integration - The addition of native Active Directory support enables cross-domain authentication with one- and two-way trusts common in multi-domain environments.
- Architecture - Built from the ground up, this architecture removes the requirement of a database and now delivers a multi-master authentication solution with built-in replication and support for multiple tenants.
vSphere Big Data Extensions (BDE) is a new addition in vSphere 5.5 for vSphere Enterprise and Enterprise Plus Editions. BDE is available as a plug-in for the vSphere Web Client.
BDE is a tool that enables administrators to deploy and manage Hadoop clusters on vSphere. It simplifies the provisioning of the infrastructure and software services required for multi-node Hadoop clusters.
It performs the following functions on the virtual Hadoop clusters it manages:
- Creates, deletes, starts, stops and resizes clusters
- Controls resource usage of Hadoop clusters
- Specifies physical server topology information
- Manages the Hadoop distributions available to BDE users
- Automatically scales clusters based on available resources and in response to other workloads on the vSphere cluster
|
|
Adv. Operation Management
Details
|
No
There is no advanced operations management tool included with XenServer.
Additional Info:
XenServer's Integration Suite Supplemental Pack allows interoperation with Microsoft System Center Operations Manager (SCOM). SCOM enables monitoring of host performance when installed on a XenServer host.
These tools can be integrated with your XenServer pool by installing the Integration Suite Supplemental Pack on each of your XenServer hosts.
For virtual desktop environments, Citrix EdgeSight is a performance and availability management solution for XenDesktop, XenApp and endpoint systems. EdgeSight monitors applications, devices, sessions, license usage, and the network in real time, allowing users to quickly analyze, resolve, and proactively prevent problems.
EdgeSight is a separate product not included with the XenServer license.
|
Limited (native) - vCenter Operations
Full (with Vendor Add-On: vRealize Operations)
VMware vRealize Operations. Optimize resource usage through reclamation and right sizing, cross-cluster workload placement and improved planning and forecasting. Enforce IT and configuration standards for an optimal infrastructure.
http://www.vmware.com/products/vrealize-operations.html
|
Limited (native) - vCenter Operations Manager Foundation
Full (with Vendor Add-On: VMware vRealize Suite)
The VMware vRealize Operations Suite is available in three editions targeting teams responsible for managing vSphere and virtual infrastructure, heterogeneous virtual and physical environments, or hybrid cloud infrastructure at the OS and application level:
1) vRealize Operations Suite Standard: vSphere Monitoring, Performance and Capacity Optimization (included free of charge in the vSphere and Acceleration Kit editions with Operations Management)
2) vRealize Operations Suite Advanced: Virtual and Physical Infrastructure Operations, including Monitoring, Performance, Capacity and Configuration Management
3) vRealize Operations Suite Enterprise: Hybrid Cloud Infrastructure Operations, including OS- and Application-Level Monitoring, Performance, Capacity and Configuration Management
For a detailed comparison see: http://www.vmware.com/uk/products/vrealize-operations/compare
|
|
|
|
Updates and Backup |
|
|
Hypervisor Upgrades
Details
|
The XenCenter released with XenServer 7.0 allows updates to be applied to all versions of XenServer (commercial and free)
NEW
With XenServer 7.0, patching and updating via the XenCenter management console (enabling automated, GUI-driven patch application and upgrades) is supported with both the commercial and free XenServer versions.
XenServer 6 introduced the Rolling Pool Upgrade Wizard. The Wizard simplifies upgrades to XenServer 6.x by performing pre-checks with a step-by-step process that blocks unsupported upgrades. You can choose between automated or semi-automated, where automated can use a network based XenServer update repository (which you have to create) while semi-automated requires local install media. There are still manual steps required for both approaches and no scheduling functionality or online repository is integrated.
|
Yes (Update Manager)
NEW
VMware Update Manager (VUM) is now part of the vCenter Server Appliance.
VUM uses its own PostgreSQL database and can benefit from the VCSA's native HA and backup.
|
Yes (Update Manager)
VMware Update Manager is a native plug-in for vCenter allowing the creation of defined patch baselines which are automatically (scheduled) applied to hosts for ESXi updates (using maintenance mode for seamless updates).
vSphere 5 introduced the ability to have concurrent updates on multiple hosts or even (where downtime is less important than update time) to update all hosts concurrently. It also facilitates the upgrade from ESX to ESXi and enhances VMware Tools upgrades (fewer reboots) and virtual appliance upgrades.
|
|
|
No
There is no integrated update engine for the guest OS of virtual machines within the Standard Edition of XenServer 7.0.
|
Limited (Update Manager)
With vSphere 5, Update Manager discontinued patching of guest operating systems. It does, however, provide upgrades of VMware Tools, upgrades of virtual machine hardware, and upgrades of virtual appliances.
|
Limited (Update Manager)
With vSphere 5, Update Manager discontinued patching of guest operating systems. It does, however, provide upgrades of VMware Tools, upgrades of virtual machine hardware, and upgrades of virtual appliances.
With vSphere 5.1 VMware introduced bundles (Standard and Standard Acceleration Kit with Operations Management) that include vCenter Protect Standard (Shavlik). vCenter Protect includes the ability to scan and patch VMs and templates (Microsoft updates and other 3rd-party apps), details here: http://www.vmware.com/uk/products/datacenter-virtualization/vcenter-protect/standard.html
|
|
|
Yes
You can take regular snapshots, application-consistent point-in-time snapshots (requires Microsoft VSS), and snapshots with memory of a running or suspended VM. All snapshot activities (take/delete/revert) can be done while the VM is online.
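For illustration only (not part of the original matrix): a minimal XenAPI (Python) sketch of taking a disk-only snapshot and a memory-including checkpoint of a running VM; the VM name and credentials are placeholder assumptions.
    import XenAPI

    # Placeholder connection details (assumptions, not from the matrix)
    session = XenAPI.Session("https://xenserver-pool-master.example.com")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("web01")[0]
        # Disk-only snapshot of the running VM
        snap = session.xenapi.VM.snapshot(vm, "web01-before-patching")
        # Snapshot including the VM's memory state ("checkpoint")
        chk = session.xenapi.VM.checkpoint(vm, "web01-with-memory")
        # To roll back later: session.xenapi.VM.revert(snap)
    finally:
        session.xenapi.session.logout()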
|
Yes
VMware snapshots can be taken and committed online (including a snapshot of the virtual machine memory).
|
Yes
VMware snapshots can be taken and committed online (including a snapshot of the virtual machine memory).
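For illustration only (not part of the original matrix): a hedged pyVmomi (Python) sketch of taking an online snapshot that includes guest memory; the vCenter address, credentials and VM name are placeholder assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details (assumptions, not from the matrix)
    ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")
    # Online snapshot including guest memory; quiesce is off because memory is captured
    task = vm.CreateSnapshot_Task(name="before-patching",
                                  description="taken via pyVmomi",
                                  memory=True, quiesce=False)
    Disconnect(si)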
|
|
Backup Integration API
Details
|
Limited
There is no specific backup framework for integration of 3rd-party backup solutions as such; however, the XenServer API allows for scripting (e.g. utilizing XenServer snapshots) and basic integration with 3rd-party backup products, e.g. Unitrends Enterprise Backup.
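For illustration only (not part of the original matrix): a hedged XenAPI (Python) sketch of a scripted backup that snapshots a VM and then streams the snapshot out as an XVA via the host's HTTP export handler; the handler path/parameters, names and credentials are assumptions based on the XenServer SDK samples.
    import shutil
    import ssl
    import urllib.request
    import XenAPI

    # Placeholder URL, credentials and VM name (assumptions, not from the matrix)
    url = "https://xenserver-host.example.com"
    session = XenAPI.Session(url)
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("web01")[0]
        snap = session.xenapi.VM.snapshot(vm, "web01-backup")
        snap_uuid = session.xenapi.VM.get_uuid(snap)
        # Stream the snapshot as an XVA archive via the /export HTTP handler
        export_url = "%s/export?uuid=%s&session_id=%s" % (url, snap_uuid, session._session)
        ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
        with urllib.request.urlopen(export_url, context=ctx) as resp, \
                open("web01-backup.xva", "wb") as out:
            shutil.copyfileobj(resp, out)
    finally:
        session.xenapi.session.logout()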
|
Yes (vStorage API Data Protection)
vStorage API for Data Protection: Enables integration of 3rd party backup products for centralized backup.
|
Yes (vStorage API Data Protection)
vStorage API for Data Protection: Enables integration of 3rd party backup products for centralized backup Vendor Link: http://www.vmware.com/products/vsphere/features-storage-api
|
|
Integrated Backup
Details
|
No (Retired)
Citrix retired the Virtual Machine Protection and Recovery (VMPR) in XenServer 6.2. VMPR was the method of backing up snapshots as Virtual Appliances.
Alternative backup products are available from Quadric Software, SEP, and Unitrends
Background:
With XenServer 6, VM Protection and Recovery (VMPR) became available for Advanced, Enterprise and Platinum Edition customers.
XenServer 5.6 SP1 introduced a basic XenCenter-based backup and restore facility, VM Protection and Recovery (VMPR), which provides a simple backup and restore utility for your critical VMs. Regular scheduled snapshots are taken automatically and can be used to restore VMs in case of disaster. Scheduled snapshots can also be automatically archived to a remote CIFS or NFS share, providing an additional level of security.
Additionally the XenServer API allows for scripting of snapshots. You can also (manually or script) export and import virtual machines for backup purposes.
|
Yes (vSphere Data Protection) - Replication of backup data, granular backup and scheduling
vSphere® Data Protection is a backup and recovery solution designed for vSphere environments. Powered by EMC Avamar, it provides agent-less, image-level virtual machine backups to disk. It also provides application-aware protection for business-critical Microsoft applications (Exchange, SQL Server, SharePoint) along with WAN-efficient, encrypted backup data replication. vSphere Data Protection is fully integrated with vCenter Server and vSphere Web Client.
|
Yes (vSphere Data Protection) - Replication of backup data, granular backup and scheduling
vSphere® Data Protection is a backup and recovery solution designed for vSphere environments. Powered by EMC Avamar, it provides agent-less, image-level virtual machine backups to disk. It also provides application-aware protection for business-critical Microsoft applications (Exchange, SQL Server, SharePoint) along with WAN-efficient, encrypted backup data replication. vSphere Data Protection is fully integrated with vCenter Server and vSphere Web Client.
New features with VDP 5.5:
- Replication of backup data to EMC Avamar - move backup data offsite for disaster recovery purposes (backup data is de-duplicated at both the source and destination and only changed data segments are sent across the wire, bandwidth utilization is minimized)
- Direct-to-Host Emergency Restore - restore of a virtual machine directly to a host without the need for vCenter Server and the vSphere Web Client (e.g. use VDP to backup vCenter Server)
- Backup and restore of individual .vmdk files
- Granular scheduling for backup and replication jobs
- Flexible VDP storage management - when deploying VDP, separate data stores can be selected for the VDP guest OS and backup data partitions
Introduction to VDP here: http://www.vmware.com/files/pdf/techpaper/Introduction-to-Data-Protection.pdf
|
|
|
|
Deployment |
|
|
Automated Host Deployments
Details
|
No
There is no integrated host deployment mechanism in XenServer - manual local or network repository based deployments are required.
|
No
Auto Deploy is now part of the vCenter Server Appliance.
Integration with VCSA 6.5 can benefit from native HA or Backup
Configurable through the GUI of the web client.
Can manage 300+ hosts
Post-boot scripts allow for additional automation.
|
Yes - Auto Deploy
With vSphere 5.1 VMware added two deployment options: Stateless Caching and Stateful Installs. Stateless caching allows a host to boot even if the Auto Deploy environment is not available (it boots from a locally cached storage device). With the Stateful Install option administrators can now leverage the Auto Deploy environment to deploy new (stateful) hosts (instead of e.g. scripted installs). Both of the new methods require the host to have a local storage device.
vSphere 5 introduced the Auto Deploy feature. With Auto Deploy, the host loads the ESXi image directly from vCenter Server into host memory. Unlike the other installation options, Auto Deploy does not store ESXi configuration or state on the host disk. vCenter Server stores and manages ESXi updates and patching through an image profile, and, optionally, the host configuration through a host profile.
vSphere 5.0 also introduced the VMware ESXi Image Builder. The Image Builder is a PowerShell CLI command set that enables customers to customize their VMware ESXi images. With Image Builder, you can create VMware ESXi installation images (for autodeploy or installation) with a customized set of updates, patches and drivers. Using the Image Builder, customers place the VMware ESXi software packages (VIBs) into software depots. The administrator then uses the Image Builder PowerCLI to combine the VIBs from the separate depots with the default VMware ESXi installation image, to create a custom image profile that can then be used to install their VMware ESXi hosts.
|
|
|
Yes
Templates in XenServer are either the included pre-configured empty virtual machines which require an OS install (a shell with appropriate settings for the guest OS) or you can convert an installed (and e.g. sysprep-ed) VM into a custom template.
There is no integrated customization of the guest available, i.e. you need to sysprep manually.
You cannot convert a template back into a VM for easy updates. You deploy a VM from a template using either a full copy or a fast clone (copy-on-write).
|
Yes (Content Library)
Content Library – Provides simple and effective management for VM templates, vApps, ISO images and scripts for vSphere Admins – collectively called “content” – that can be synchronized across sites and vCenter Servers.
vSphere 6.5 adds the ability to mount ISOs, customize VMs from within the Content Library, and update an existing template with a new version.
|
Yes (Multi-Site Content Library)
NEW
vSphere 6.0 Multi-Site Content Library – Provides simple and effective management for VM templates, vApps, ISO images and scripts for vSphere Admins – collectively called “content” – that can be synchronized across sites and vCenter Servers.
Multi-Site Content Library in Enterprise Plus only; Enterprise, Standard - n/a
Deploy VMs from templates with integrated guest customization for Windows and Linux guests (e.g. Sysprep for Windows guests). Images are customized, e.g. Sysprep-ed, on deployment of a new VM from the template (not on conversion to template), allowing for quick conversion from VM to template and vice versa (enabling easy updates of templates). Templates can be updated using VMware Update Manager.
|
|
Tiered VM Templates
Details
|
vApp
XenServer 6 introduced vApps - maintained with v 6.5 and 7
A vApp is logical group of one or more related Virtual Machines (VMs) which can be started up as a single entity in the event of a disaster.
The primary functionality of a XenServer vApp is to start the VMs contained within the vApp in a user predefined order, to allow VMs which depend upon one another to be automatically sequenced. This means that an administrator no longer has to manually sequence the startup of dependent VMs should a whole service require restarting (for instance in the case of a software update). The VMs within the vApp do not have to reside on one host and will be distributed within a pool using the normal rules.
This also means that the XenServer vApp has a more basic capability than e.g. VMware's vApps or Microsoft's Service Templates, which contain more advanced functions.
|
Yes (vApp/OVF)
Open Virtualization Format (OVF)
vApp is a collection of virtual machines (VMs) and sometimes other vApps that host a multi-tier application, its policies and service levels.
|
Yes (vApp/OVF)
NEW
vSphere 6.0 Multi-Site Content Library – Provides simple and effective management for VM templates, vApps, ISO images and scripts for vSphere Admins – collectively called “content” – that can be synchronized across sites and vCenter Servers.
Multi-Site Content Library in Enterprise Plus only; Enterprise, Standard - n/a
vSphere allows you to create and deploy multi-tiered applications in a packaged approach using vApps.
A vApp is a container that can hold one or more virtual machines alongside metadata describing the relationship and configuration settings for the components of the vApp. A vApp can be powered on and off, and can also be cloned. The distribution format for a vApp is OVF.
While you cannot create vApp templates in vCenter as such, you can export and import a vApp as an OVF template.
vCloud Director (fee based) uses vApps as the default deployment container and lets you create vApp templates; vApps are created by deploying the template to the cloud organization for which the template was created.
- vCloud Director 5.5 provides the ability to clone a powered-on or suspended vApp, including capturing the memory state of the virtual machines, and to save this clone to the catalog.
- vCloud Director prior to 5.5 required that users create a vApp template in the catalog prior to instantiating the vApp in the VDC. vCD 5.5 enables users to import and export vApps directly to and from the VDC without the need for an intermediate vApp template in the catalog
- vCloud Director 5.5 adds support for importing and exporting vApps using the OVA file format.
|
|
|
No
There is no integrated capability to create host templates, or to apply them to hosts or check hosts for compliance with particular settings.
|
No
Host Profiles eliminate per-host, manual, or UI-based host configuration and maintain configuration consistency using a reference configuration, which can be applied to hosts or used to check their compliance status.
vSphere 6.5 new features: Filters, Bookmarks
Host customizations (answer files) for offline customization
Copy settings between profiles
New pre-check, Compliance view enhancements
Remediation (DRS integrated), Parallel remediation
|
Yes (Host Profiles) - enhanced for Auto Deploy
Host Profiles eliminate per-host, manual, or UI-based host configuration and maintain configuration consistency using a reference configuration, which can be applied to hosts or used to check their compliance status.
To accommodate configuring VMware ESXi hosts deployed with Auto Deploy in vSphere 5, several significant improvements have been made to Host Profiles. Host Profiles have been extended to include additional configuration settings such as support for iSCSI, FCoE, storage multipathing, individual device settings and kernel module settings. In addition Host Profiles now enables creating a per-host answer file. The answer file is used to store host-specific attributes that are not shared with other hosts. This facilitates the automated deployment of hosts using Auto Deploy, because the host-specific settings can be applied using the answer file, and the remaining shared configuration can then be applied using the Host Profile.
|
|
|
No
There is no ability to associate storage profiles with tiers of storage resources in XenServer (e.g. to facilitate automated, compliance-driven storage tiering).
|
Yes (Storage Based Policy Management)
Using the Storage Based Policy Management (SPBM) framework, administrators can define different policies with different IO limits, and then assign VMs to those policies. This simplifies the ability to offer varying tiers of storage services and provides the ability to validate policy compliance.
The policy-driven control plane is the management layer of the VMware software-defined storage model, which automates storage operations through a standardized approach that spans across the heterogeneous tiers of storage in the virtual data plane.
Storage Policy-Based Management (SPBM) is VMware’s implementation of the policy-driven control plane which provides common management over:
- vSphere Virtual Volumes - external storage (SAN/NAS)
- Virtual SAN – x86 server storage
- Hypervisor-based data services – vSphere Replication or third-party solutions enabled by the vSphere APIs for IO Filtering.
|
Yes (Storage Policy-Based Management)
The policy-driven control plane is the management layer of the VMware software-defined storage model, which automates storage operations through a standardized approach that spans across the heterogeneous tiers of storage in the virtual data plane.
Storage Policy-Based Management (SPBM) is VMware’s implementation of the policy-driven control plane which provides common management over:
- vSphere Virtual Volumes - external storage (SAN/NAS)
- Virtual SAN – x86 server storage
- Hypervisor-based data services – vSphere Replication or third-party solutions enabled by the vSphere APIs for IO Filtering.
|
|
|
|
Other |
|
|
|
No
A resource pool in XenServer is hierarchically the equivalent of a vSphere or Hyper-V cluster. There is no functionality to sub-divide resources within a pool.
|
Limited (host-level only)
vSphere supports hierarchical resource pools (parent and child pools) for CPU and memory resources on individual hosts and across hosts in a cluster. They allow for resource isolation and sharing between pools and for access delegation of resources in a cluster. Please note that DRS must be enabled for resource pool functionality if hosts are in a cluster. If DRS is not enabled the hosts must be moved out of the cluster for (local) resource pools to work.
|
Yes
vSphere supports hierarchical resource pools (parent and child pools) for CPU and memory resources on individual hosts and across hosts in a cluster. They allow for resource isolation and sharing between pools and for access delegation of resources in a cluster. Please note that DRS must be enabled for resource pool functionality if hosts are in a cluster. If DRS is not enabled the hosts must be moved out of the cluster for (local) resource pools to work.
|
|
|
No (XenConvert: retired), V2V: yes
XenConvert was retired in XenServer 6.2
XenConvert allowed conversion of a single physical machine to a virtual machine. The ability to do this conversion is included in the Provisioning Services (PVS) product shipped as part of XenDesktop. Alternative products support the transition of large environments and are available from PlateSpin.
Note: The XenServer Conversion Manager, for converting virtual machines (V2V), remains fully supported.
Background:
XenConvert supported the following sources: physical (online) Windows systems OVF, VHD, VMDK or XVA onto these targets: XenServer directly, vhd, OVF or PVS vdisk
|
Yes (vCenter Converter)
VMware vCenter Converter transforms your Windows- and Linux-based physical machines and third-party image formats to VMware virtual machines.
http://www.vmware.com/products/converter.html
|
Yes (VMware Converter)
Performance and Reliability
Multiple simultaneous conversions enable large-scale virtualization implementations.
Quiescing and snapshotting of the guest operating system on the source machine before migrating the data ensures conversion reliability.
Hot cloning makes conversions non-disruptive, with no source server downtime or reboot.
Sector-based copying enhances cloning and conversion speed.
Support for cold cloning (conversion that requires server downtime and reboot) is available, in addition to hot cloning.
Management
Centralized management console allows users to queue up and monitor multiple simultaneous remote, as well as local, conversions.
Easy-to-use wizards minimize the number of steps to conversion.
Support for both local and remote cloning enables conversions in remote locations such as branch offices.
Interoperability
vCenter Converter supports many source physical machines, including Windows XP, Windows 2003, Windows Server and Linux. It also supports third-party disk images from Microsoft Hyper-V, Virtual Server and Virtual PC; Parallels Desktop; Symantec System Recovery; Norton Ghost; Acronis and StorageCraft.
See more at: http://www.vmware.com/uk/products/converter/features.html#sthash.lSA5zFyR.dpuf
|
|
Self Service Portal
Details
|
No (Web Self Service: retired in XS6.2)
Web Self Service was a lightweight portal which allowed individuals to operate their own virtual machines without having administrator credentials to the XenServer host.
|
No
Self-Service Portal functionality is primarily provided by components in VMware's vCloud Suite or vRealize Automation - a comprehensive cloud portfolio available as a single purchase with a per-processor licensing metric.
http://www.vmware.com/products/vcloud-suite.html
http://www.vmware.com/products/vrealize-automation.html
|
No (native)
Yes (with Vendor Add-On: vCloud Suite)
Self-Service Portal functionality is primarily provided by components in VMware's vCloud Suite - a comprehensive cloud portfolio available as a single purchase with a per-processor licensing metric.
The vCloud Suite 5.5 is available in three editions: vCloud Suite Standard, Advanced and Enterprise (but includes vSphere Enterprise Plus regardless of the cloud edition) and includes the following:
- vSphere Enterprise Plus (included in all 3 editions) ;
- vCloud Connector is now available as a free offering.
- NSX for vSphere can be purchased as an add-on to all editions of vCloud Suite and provides advanced networking and security features.
- vRealize Operations (Standard for Standard, Advanced for Advanced, Enterprise for Enterprise)
- vRealize Automation (Standard for Standard, Advanced for Advanced, Enterprise for Enterprise);
- Site Recovery Manager Enterprise (Ent. only);
- vCloud Suite can be extended to the hybrid cloud with the purchase of the vRealize Suite and vCloud Air
Details: http://www.vmware.com/products/vcloud-suite/compare
|
|
Orchestration / Workflows
Details
|
Yes (Workflow Studio)
Workflow Studio provides a graphical interface for workflow composition in order to reduce scripting. Workflow Studio allows administrators to tie technology components together via workflows. Workflow Studio is built on top of Windows PowerShell and Windows Workflow Foundation. It natively supports Citrix products including XenApp, XenDesktop, XenServer and NetScaler.
Workflow Studio was available as a component of the XenDesktop suite but was retired in XenDesktop 7.x.
|
Yes (vRealize Orchestrator)
vRealize Orchestrator is included with vCenter Server Standard and allows admins to capture often-executed tasks/best practices and turn them into automated workflows (drag and drop) or use out-of-the-box workflows. An increasing number of plug-ins is available to enable automation of tasks in related products.
http://www.vmware.com/products/vrealize-orchestrator.html
|
Yes (vCenter Orchestrator)
vCenter Orchestrator is included with vCenter Server Standard and allows admins to capture often-executed tasks/best practices and turn them into automated workflows (drag and drop) or use out-of-the-box workflows. An increasing number of plug-ins is available to enable automation of tasks in related products, e.g. vCloud Director, Microsoft Active Directory, vCloud Automation Center etc.
New features on vCenter Orchestrator 5.5:
- More efficient workflow development experience
- Improved Workflow Schema
- Improved Scripting API Explorer
- Improved scalability and high availability (new Orchestrator cluster mode)
- vSAN support in the vCenter Server 5.5 plug-in
- Security Improvements
- Improved integration with the vSphere Web Client
- REST API enhancements
Details here: http://www.vmware.com/support/orchestrator/doc/vcenter-orchestrator-55-release-notes.html#new
vSphere 5 introduced newly designed workflows with enhanced ease of use and the ability to launch vCO directly from the new vSphere Web Client.
|
|
|
Basic (NetScaler - Fee-Based Add-On)
XenServer uses netfilter/iptables firewalling. There are no specific frameworks or APIs for antivirus or firewall integration.
The fee-based NetScaler provides various (network) security related capabilities through e.g.
- NetScaler Gateway: secure application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile)
- NetScaler AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the NetScaler application delivery solution and is included with Citrix NetScaler, Platinum Edition.
Details here: http://bit.ly/17ttmKk
|
Yes (ESXi Firewall, vShield Endpoint)
new in vSphere 6.5:
- VM Encryption (n/a for Standard)
- vMotion Encryption (n/a for Standard)
- ESXi Secure Boot,
- Virtual Machine Secure Boot,
- Enhanced Logging
|
Free: ESXi Firewall, vShield Endpoint;
Advanced (with Vendor Add-On: NSX / vCloud Networking and Security)
vSphere contains standard security features and vShield Endpoint in all vSphere offerings except Essentials (vShield Endpoint offloads Anti Virus processing to a secure virtual appliance supplied by VMware partners).
vCloud Networking and Security (fee based Add-On) provides additional networking and security functionality delivered through virtual appliances, such as a virtual firewall, virtual private network (VPN), load balancing, NAT, DHCP and VXLAN-extended networks.
With the release of vCloud Networking and Security (vCNS) 5.5, vCNS is only available as part of vCloud Suite 5.5 - the vCNS standalone SKUs is End of Availability (EOA) effective September 30th, 2013
(Free) vSphere integrated security related features:
- Small hypervisor footprint - Simplifies deployment, maintenance and patching, and reduces vulnerability by presenting a much smaller attack surface.
- Software acceptance levels - Prevents unauthorized software installation.
- Robust APIs - Enable agentless monitoring, eliminating the need to install third-party software.
- Host firewall - Protects the vSphere host management interface with a configurable, stateless firewall.
- Improved logging and auditing - Log all host activity under the logged-in user's account, making it easy to monitor and audit activity on the host.
- Secure syslog - Log messages on local and/or remote log servers, with remote logging via either SSL or TCP connections.
- AD integration - Configure the vSphere host to join an Active Directory domain; individuals requesting host access are automatically authenticated against the centralized user directory.
(Fee) vCloud Networking and Security provides:
- Firewall: Stateful inspection firewall that can be applied either at the perimeter of the virtual data center or at the virtual network interface card (vNIC) level directly in front of specific workloads.
- VPN: Industry-standard IPsec and SSL VPN capabilities, site-to-site VPN to link virtual data centers and enable hybrid cloud computing. The SSL VPN capability delivers remote administration into the virtual data center through a bastion host.
- Load balancer: A virtual-appliance-based load balancer. Placed at the edge of the virtual data center, the load balancer supports Web-, SSL- and TCP-based scale-out
- VXLAN: Creates Layer 2 logical networks across noncontiguous clusters or pods without the need for VLANs (vSphere Distributed Switch and multicast required).
Details here: http://www.vmware.com/files/pdf/products/vcns/vmware-vcloud-networking-and-security-overview.pdf
VMware has also announced the VMware NSX Platform for Network Virtualization - merging Nicira NVP and vCloud Networking and Security. Expected availability is Q4/13 (see Network Virtualization / SDN )
|
|
Systems Management
Details
|
Yes (API / SDKs, CIM)
XenServer includes a XML-RPC based API, providing programmatic access to the extensive set of XenServer management features and tools. The XenServer API can be called from a remote system as well as local to the XenServer host. Remote calls are generally made securely over HTTPS, using port 443.
XenServer SDK: There are five SDKs available, one for each of C, C#, Java, PowerShell, and Python. For XenServer 6.0.2 and earlier, these were provided under an open-source license (LGPL or GPL with the common linking exception). This allows use (unmodified) in both closed- and open-source applications. From XenServer 6.1 onwards the bindings are for the most part provided under a BSD license that allows modifications.
Citrix Project Kensho provided a Common Information Model (CIM) interface to the XenServer API and introduces a Web Services Management (WSMAN) interface to XenServer. Management agents can be installed and run in the Dom0 guest.
Details here: http://bit.ly/12nQl9f
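For illustration, here is a minimal sketch of calling that XML-RPC API through the Python SDK (XenAPI.py); the host URL and credentials are placeholders. It logs in over HTTPS (port 443, as noted above) and lists VMs with their power state, skipping templates and the control domain.
# Minimal sketch, assuming the XenAPI Python bindings from the XenServer SDK.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "password")
try:
    for ref in session.xenapi.VM.get_all():
        rec = session.xenapi.VM.get_record(ref)
        if rec["is_a_template"] or rec["is_control_domain"]:
            continue  # skip templates and Dom0
        print(rec["name_label"], rec["power_state"])
finally:
    session.xenapi.session.logout()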
|
vSphere’s REST API, PowerCLI, vSphere CLI, ESXCLI, Datacenter CLI
vSphere’s REST APIs have been extended to include VCSA and VM based management and configuration tasks. There’s also a new way to explore the available vSphere REST APIs with the API Explorer. The API Explorer is available locally on the vCenter server.
PowerCLI is now 100% module based, the Core module now supports cross vCenter vMotion by way of the Move-VM cmdlet.
The VSAN module has been bolstered to feature 13 different cmdlets which focus on trying to automate the entire lifecycle of VSAN.
ESXCLI now features several new storage-based commands for handling VSAN core dump procedures, utilizing VSAN's iSCSI functionality, managing NVMe devices, and other core storage tasks, as well as NIC-based commands for queuing, coalescing, and basic FCoE tasks.
Datacenter CLI (DCLI), which is also installed as part of vCLI, can make use of all the new vSphere REST APIs!
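As a hedged illustration of the REST API workflow described above (the same endpoints are browsable in the API Explorer), the Python sketch below creates a session and lists VMs; the vCenter hostname and credentials are placeholders, and certificate verification is disabled purely for lab use.
# Minimal sketch of the vSphere 6.5 REST API using the requests library.
import requests

VCENTER = "https://vcenter.example.com"

# POST /rest/com/vmware/cis/session returns a session token
token = requests.post(
    VCENTER + "/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "password"),
    verify=False,  # lab only: skip certificate verification
).json()["value"]

# Subsequent calls authenticate via the vmware-api-session-id header
vms = requests.get(
    VCENTER + "/rest/vcenter/vm",
    headers={"vmware-api-session-id": token},
    verify=False,
).json()["value"]

for vm in vms:
    print(vm["name"], vm["power_state"])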
|
SNMPv3 +CIM, esxcli, vMA
vSphere 5.1 added support for SNMPv3, which provides many improvements over SNMPv2. These include added security, with SNMP authentication, and added privacy, with SSL encryption. SNMPv3 also provides additional configuration capabilities through SNMP Set. Also in vSphere 5.1, the SNMP agent has been unbundled from the VMkernel and now runs as an independent agent on the host. This makes it easier to incorporate SNMP updates and patches because they are no longer tied to the vSphere kernel.
ESXi's hardware management is based on CIM (with standard or vendor-specific CIM providers) and integrated SNMP agents (hosts and vCenter).
vSphere 5 introduced a new, unified CLI, which is using a consistent look and feel for local and remote management + improved syntax.
The esxcli command is available locally on each VMware ESXi host via the ESXi shell, as part of the optional vCLI package that can be installed on any supported Windows or Linux server, or through the vSphere Management Assistant (vMA)
vMA is a Linux-based virtual machine that is pre-installed with a command-line interface and select third-party agents needed to manage your vSphere infrastructure. Administrators and developers can use vMA to run scripts and agents to manage vSphere 5.5, vSphere 5.1 and later, vSphere 5.0 and later systems. vMA includes the vSphere SDK for Perl and the vSphere Command-Line Interface (vSphere CLI).
|
|
|
Network and Storage
|
|
|
|
|
|
|
Storage |
|
|
Supported Storage
Details
|
DAS, SAS, iSCSI, NAS, SMB, FC, FCoE, openFCoE
XenServer data stores are called Storage Repositories (SRs), they support IDE, SATA, SCSI (physical HBA as well as SW initiator) and SAS drives locally connected, and iSCSI, NFS, SMB, SAS and Fibre Channel remotely connected.
Background: The SR and VDI abstractions allow advanced storage features such as thin provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not inherently support advanced operations directly, a software stack is provided based on Microsoft's Virtual Hard Disk (VHD) specification which implements these features.
SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
Reference XenServer 7.0 Administrators Guide: http://bit.ly/2d8uA7C
Also refer to the XenServer Hardware Compatibility List (HCL) for more details.
|
DAS, NFS, FC, iSCSI, FCoE (HW&SW), vFRC, SDDC
vSphere 6.5 adds:
- Virtual SAN 6.5
- Virtual Volumes 2.0 (VVOL)
- VMFS 6
Support for 4K Native Drives in 512e mode
SE Sparse Default
Automatic Space Reclamation
Support for 512 devices and 2000 paths (versus 256 and 1024 in the previous versions)
CBRC aka View Storage Accelerator
vSphere 6.0 adds:
- Virtual SAN 6.0
- Virtual Volumes (VVOL)
- NFS 4.1 client
- NFS and iSCSI IPV6 support
- Storage Based Policy Management (SPBM) now available in all vSphere editions
- SIOC IOPS Reservations
- vSphere Replication
- Support for 2000 virtual machines per vCenter Server
|
DAS, NFS, FC, iSCSI, FCoE (HW&SW), vFRC, SDDC
vSphere 6.0 adds:
- Virtual SAN 6.0
- Virtual Volumes (VVOL)
- NFS 4.1 client
- NFS and iSCSI IPV6 support
- Storage Based Policy Management (SPBM) now available in all vSphere editions
- SIOC IOPS Reservations
- vSphere Replication
- Support for 2000 virtual machines per vCenter Server
vSphere 5.5 adds:
- End-to-End 16Gb FC support: Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
- vSphere Flash Read Cache that enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource, replacing the previous Swap to SSD feature
- prior to vSphere 5.5 storage supported in MSCS environments was Fibre Channel (FC). With the vSphere 5.5 release, this restriction has been relaxed to include support for FCoE and iSCSI.
vSphere 5.1 added support for 16Gb HBAs (prior to 5.1 they could be used, but only at 8Gb speed); however, there was still no support for full, end-to-end 16Gb connectivity from host to array. To get full bandwidth, a number of 8Gb connections had to be created from the switch to the storage array.
- vSphere 5.0 added SSD handling and optimization. The VMkernel automatically recognizes and tags SSD devices that are local to an ESXi host or are on the network. In addition, the VMkernel scheduler is modified to allow ESXi swap to extend to local or network SSD devices, which enables memory over commitment and minimizes performance impact.
- vSphere 5.0 also introduced a software FCoE adaptor. The software FCoE adaptor will require a network adaptor that can support partial FCoE offload capabilities before it can be enabled.
|
|
|
Yes (limited for SAS)
Dynamic multipathing support is available for Fibre Channel and iSCSI storage arrays (round robin is default balancing mode). XenServer also supports the LSI Multi-Path Proxy Driver (MPP) for the Redundant Disk Array Controller (RDAC) - by default this driver is disabled. Multipathing to SAS based SANs is not enabled, changes must typically be made to XenServer as SAS drives do not readily appear as emulated SCSI LUNs.
|
Yes (enhanced APD and PDL) PDL AutoRemove
vSphere uses natively integrated multi-path capability or can take advantage of vendor specific capabilities using vStorage APIs for Multipathing.
By default, ESXi provides an extensible multipathing module called the Native Multipathing Plug-In (NMP). Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides a default path selection algorithm based on the array type. The NMP associates a set of physical paths with a specific storage device, or LUN. The specific details of handling path failover for a given storage array are delegated to a Storage Array Type Plug-In (SATP). The specific details for determining which physical path is used to issue an I/O request to a storage device are handled by a Path Selection Plug-In (PSP). SATPs and PSPs are sub plug-ins within the NMP module. With ESXi, the appropriate SATP for an array you use will be installed automatically. You do not need to obtain or download any SATPs.
PDL AutoRemove: Permanent device loss (PDL) is a situation that can occur when a disk device either fails or is removed from the vSphere host in an uncontrolled fashion. PDL detects if a disk device has been permanently removed. When the device enters this PDL state, the vSphere host can take action to prevent directing any further, unnecessary I/O to this device. With vSphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state.
|
Yes (enhanced APD and PDL) PDL AutoRemove
vSphere uses natively integrated multi-path capability or can take advantage of vendor specific capabilities using vStorage APIs for Multipathing.
By default, ESXi provides an extensible multipathing module called the Native Multipathing Plug-In (NMP). Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides a default path selection algorithm based on the array type. The NMP associates a set of physical paths with a specific storage device, or LUN. The specific details of handling path failover for a given storage array are delegated to a Storage Array Type Plug-In (SATP). The specific details for determining which physical path is used to issue an I/O request to a storage device are handled by a Path Selection Plug-In (PSP). SATPs and PSPs are sub plug-ins within the NMP module. With ESXi, the appropriate SATP for an array you use will be installed automatically. You do not need to obtain or download any SATPs.
Enterprise Plus, Enterprise only; Standard - n/a
PDL AutoRemove: Permanent device loss (PDL) is a situation that can occur when a disk device either fails or is removed from the vSphere host in an uncontrolled fashion. PDL detects if a disk device has been permanently removed. When the device enters this PDL state, the vSphere host can take action to prevent directing any further, unnecessary I/O to this device. With vSphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state.
vSphere 5.1 improved All Paths Down (APD) and Permanent Device Loss (PDL) through the ability to handle more complex transient APD conditions. It does not allow hostd to become hung indefinitely when devices are removed in an uncontrolled manner.
• Enable VMware vSphere High Availability (vSphere HA) to detect PDL and be able to restart virtual machines on other hosts in the cluster that might not have this PDL state on the datastore.
• Introduce a PDL method for those iSCSI arrays that present only one LUN for each target. These arrays were problematic, because after LUN access was lost, the target also was lost. Therefore, the ESXi host had no way of reclaiming any SCSI sense codes.
|
|
Shared File System
Details
|
Yes (SR)
XenServer uses the concept of Storage Repositories (disk containers/data stores). These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. All hosts in a single resource pool must have at least one shared SR in common. NAS, iSCSI (Software initiator and HBA are both supported), SAS, or FC are supported for shared storage.
|
Yes (VMFS v6)
NEW
VMware's clustered file system, allowing for concurrent access by multiple hosts for live migration, file-based locking (to ensure data consistency), dynamic volume resizing etc.
new in VMFS 6
Support for 4K Native Drives in 512e mode
SE Sparse Default
Automatic Space Reclamation
Support for 512 devices and 2000 paths (versus 256 and 1024 in the previous versions)
CBRC aka View Storage Accelerator
|
Yes (VMFS v5)
VMware's clustered file system, allowing for concurrent access by multiple hosts for live migration, file-based locking (to ensure data consistency), dynamic volume resizing etc.
- vSphere 5.5 increases the maximum size of a VMDK from 2TB-512 bytes to the new limit of 62TB. The maximum size of a virtual Raw Device Mapping (RDM) is also increasing, from 2TB-512 bytes to 62TB (including virtual machine snapshots)
- VMFS Heap Improvements - maximum of 256MB of heap, enables vSphere hosts to access all address space of a 64TB VMFS (addressing concerns when accessing open files of more than 30TB from a single vSphere host)
VMFS5 introduced the following enhancements: Unified Block size of 1MB, smaller Sub-Blocks (8k) and support for small files - all improving disk usage efficiency. Also the file locking performance has been improved.
|
|
|
Yes (iSCSI, FC, FCoE)
XenServer 7.0 adds Software-boot-from-iSCSI for Cisco UCS
Yes for XenServer 6.1 and later (XenServer 5.6 SP1 added support for boot from SAN with multi-pathing support for Fibre Channel and iSCSI HBAs)
Note: Rolling Pool Upgrade should not be used with Boot from SAN environments. For more information on upgrading boot from SAN environments see Appendix B of the XenServer 7.0 Installation Guide: http://bit.ly/2d8BkSV
|
Yes (FC, iSCSI, FCoE and SW FCoE)
Boot from iSCSI, FCoE, and Fibre Channel boot are supported
|
Yes (FC, iSCSI, FCoE and SW FCoE - NEW)
vSphere 5.1 enhanced the software FCoE initiator to enable an ESXi 5.1 host to install to and boot from an FCoE LUN, using the software FCoE initiator (via a NIC) rather than requiring a dedicated FCoE hardware adapter.
Boot from iSCSI, FCoE, and Fibre Channel boot are supported (if booting from NIC then the network adapter must support iBFT)
|
|
|
No
While there are several (unofficial) approaches documented, officially no flash drives are supported as boot media for XenServer 7.0.
|
Yes
Boot from USB is supported
|
Yes
Boot from USB is supported
|
|
Virtual Disk Format
Details
|
vhd, raw disk (LUN)
XenServer supports file based vhd (NAS, local), block device-based vhd (FC, SAS, iSCSI) using a Logical Volume Manager on the Storage Repository or a full LUN (raw disk)
|
vmdk, raw disk (RDM)
VMware Virtual Machine Disk format (vmdk) and Raw Device Mapping (RDM) - essentially a raw disk mapped via a proxy file (making it appear like part of a VMFS file system)
|
vmdk, raw disk (RDM)
VMware Virtual Machine Disk format (vmdk) and Raw Device Mapping (RDM) - essentially a raw disk mapped via a proxy file (making it appear like part of a VMFS file system)
|
|
|
2TB
For XenServer 7.0 the maximum virtual disk sizes are:
- NFS: 2TB minus 4GB
- LVM (block): 2TB minus 4 GB
Reference: http://bit.ly/2dtaur1
|
62TB (vmdk and virtual RDM), 64TB (physical RDM)
vSphere increased the maximum size of a virtual machine disk file (VMDK) from 2TB minus 512 bytes to 62TB. The maximum size of a Raw Device Mapping (RDM) also increased from 2TB minus 512 bytes to 64TB in physical compatibility and 62TB in virtual compatibility. Virtual machine snapshots also support this new size for delta disks that are created when a snapshot is taken of the virtual machine.
|
62TB for vmdk, RDM and snapshots
vSphere is increasing the maximum size of a virtual machine disk file (VMDK) from 2TB - 512 bytes to the new limit of 62TB. The maximum size of a virtual Raw Device Mapping (RDM) is also increasing, from 2TB - 512 bytes to 62TB. Virtual machine snapshots also support this new size for delta disks that are created when a snapshot is taken of the virtual machine.
|
|
Thin Disk Provisioning
Details
|
Yes (Limitations on block)
XenServer supports 3 different types of storage repositories (SR)
1) File based vhd, which is a local ext3 or remote NFS filesystems, which supports thin provisioning for vhd
2) Block device-based vhd format (SAN based on FC, iSCSI, SAS), which has no support for thin provisioning of the virtual disk but supports thin provisioning for snapshots
3) LUN-based raw format - a full LUN is mapped as a virtual disk image (VDI), so thin provisioning is only applicable if the storage array hardware supports that functionality
|
Yes
Thin provisioning allowing for disk space saving through allocation of space based on usage (not pre-allocation).
VMFS6 - Automatic Space Reclamation
|
Yes (incl. SE Sparse Disk)
Thin provisioning allowing for disk space saving through allocation of space based on usage (not pre-allocation). vSphere 5 introduced some enhancements with additional VAAI functionality and VMFS5 (see Storage APIs) including reclaiming unused space.
However, traditional thin provisioning does not address reclaiming stale or deleted data within a guest OS, leading to a gradual growth of storage allocation to a guest OS over time.
With the release of vSphere 5.1, VMware introduced a new virtual disk type, the space-efficient sparse virtual disk (SE sparse disk). One of its major features is the ability to reclaim previously used space within the guest OS. Another major feature of the SE sparse disk is the ability to set a granular virtual machine disk block allocation size according to the requirements of the application. Some applications running inside a virtual machine work best with larger block allocations; some work best with smaller blocks. This was not tunable in the past.
|
|
|
No
There is no NPIV support for XenServer
|
Yes (RDM only)
NPIV requires RDM (Raw Disk Mapping), it is not supported with VMFS volumes. NPIV requires supported switches (not direct storage attach).
|
Yes (RDM only)
NPIV requires RDM (Raw Disk Mapping), it is not supported with VMFS volumes. NPIV requires supported switches (not direct storage attach).
|
|
|
Yes - Clone on boot, clone, PVS, MCS
XenServer 6.2 introduced Clone on Boot
This feature supports Machine Creation Services (MCS) which is shipped as part of XenDesktop. Clone on boot allows rapid deployment of hundreds of transient desktop images from a single source, with the images being automatically destroyed and their disk space freed on exit.
General cloning capabilities: When cloning VMs based off a single VHD template, each child VM forms a chain where new changes are written to the new VM, and old blocks are directly read from the parent template. When this is done with a file based vhd (NFS) then the clone is thin provisioned. Chains up to a depth of 30 are supported but beware performance implications.
Comment: Citrix's desktop virtualization solution (XenDesktop) provides two additional technologies that use image-sharing approaches:
- Provisioning Services (PVS) provides a (network) streaming technology that allows images to be provisioned from a single shared-disk image. Details: http://bit.ly/2d0FrQp
- With Machine Creation Services (MCS) all desktops in a catalog will be created off a Master Image. When creating the catalog you select the Master and choose if you want to deploy a pooled or dedicated desktop catalog from this image.
Note that neither PVS (for virtual machines) nor MCS is included in the base XenServer license.
|
No
VMware's virtual image sharing technology (View Composer linked clones) is supported with VMware's virtual desktop solution (Horizon View).
This functionality has been extended to vCloud Director and vRealize Automation, but it is not included in the vSphere editions without vCD.
Both Horizon View and vCD (as part of the vCloud Suites) are fee-based Add-Ons.
|
No (native)
Yes (with Vendor Add-On: vCloud Suite)
VMware's virtual image sharing technology (View Composer linked clones) is supported with VMware's virtual desktop solution (Horizon View).
This functionality has been extended to vCloud Director (part of the vCloud Suite that includes vSphere 5.5) but is not included in the vSphere editions without vCD.
Both Horizon View and vCD (as part of the vCloud Suites) are fee-based Add-Ons.
|
|
SW Storage Replication
Details
|
No
There is no integrated (software-based) storage replication capability available within XenServer
|
Yes (vSphere Replication)
vSphere Replication is VMware’s proprietary hypervisor-based replication engine designed to protect running virtual machines from partial or complete site failures by replicating their VMDK disk files.
This version extends support for the 5 minute RPO setting to the following new data stores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL and VSAN 6.5.
|
Yes - vSphere Replication
vSphere Replication copies a virtual machine to another location, within or between clusters, and makes that copy available for restoration through the vSphere Web Client or through a full disaster recovery product such as Site Recovery Manager. vSphere Replication protects the virtual machine on an ongoing basis. Changed blocks in the virtual machine disk(s) for a running virtual machine at a primary site are sent via the network to a secondary site, where they are applied to the virtual machine disks for the offline (protection) copy of the virtual machine.
It enables the use of heterogeneous storage across sites and reduces costs by provisioning lower-priced storage at the failover location.
vSphere Replication is typically limited to 500 protected virtual machines.
VMware introduced vSphere Replication with vSphere 5 and SRM 5 (hypervisor-based replication product). Prior to vSphere 5.1 this was only a feature enabled with Site Recovery Manager (SRM), a fee-based extension product - not the base vSphere product. With 5.1 this feature became available with the standard vSphere editions (maintained with 5.5).
vSphere Replication in 6.0 – Improved scale and performance for vSphere Replication. Improved Recovery Point Objectives (RPOs) to 5 minutes. Support for 2000 VM replications per vCenter.
vSphere Replication in 5.5:
- New user interface: You can access vSphere Replication from the Home screen of the vSphere Client. The vSphere Replication UI provides a summarized view of all vCenter Server instances that are registered with the same SSO and the status of each vSphere Replication extension.
- Multiple points in time recovery: This feature allows the vSphere Replication administrator to configure the retention of replicas from multiple points in time. You can recover virtual machines at different points in time (PIT), such as the last known consistent state.
- Adding additional vSphere Replication servers: You can deploy multiple vSphere Replication servers to meet load balancing needs.
- Outgoing and Incoming views.
- Interoperability with Storage vMotion and Storage DRS on the primary site: You can move the disk files of a replicated source virtual machine using Storage vMotion and Storage DRS, with no impact on the ongoing replication.
- vSphere 5.5 includes VMware Virtual SAN as an experimental feature. You can use VMware Virtual SAN datastores as a target datastore when configuring replications, but it is not supported for use in a production environment.
- Configure vSphere Replication on virtual machines that reside on VMware vSphere Flash Read Cache storage. vSphere Flash Read Cache is disabled on virtual machines after recovery.
vSphere Replication overview: http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Replication-Overview.pdf
vSphere Replication Limits: http://bit.ly/16PJKTe
|
|
|
IntelliCache
XenServer 6.5 introduced a read caching feature that uses host memory in the new 64-bit Dom0 to reduce IOPS on storage networks and improve Login VSI scores, with VMs booting up to 3x faster. The read cache feature is available for XenDesktop & XenApp Platinum users who have an entitlement to this feature.
Within XenServer 7.0, Login VSI scores of 585 have been attained (Knowledge Worker workload on Login VSI 4.1).
IntelliCache is a XenServer feature that can (only!) be used in a XenDesktop deployment to cache temporary and non-persistent operating-system data on the local XenServer host. It is of particular benefit when many Virtual Machines (VMs) all share a common OS image. The load on the storage array is reduced and performance is enhanced. In addition, network traffic to and from shared storage is reduced as the local storage caches the master image from shared storage.
IntelliCache works by caching data from a VMs parent VDI in local storage on the VM host. This local cache is then populated as data is read from the parent VDI. When many VMs share a common parent VDI (for example by all being based on a particular master image), the data pulled into the cache by a read from one VM can be used by another VM. This means that further access to the master image on shared storage is not required.
Reference: http://bit.ly/2d8Bxpm
|
No
vSphere 5.5 introduced the vSphere Flash Read Cache that enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource.
vSphere hosts can use the vSphere Flash Resource as vSphere Flash Swap Cache, which replaces the Swap to SSD feature previously introduced with vSphere 5.0. It provides a write-through cache mode that enhances virtual machines performance without the modification of applications and OSs.
At its core Flash Cache enables the offload of READ I/O from the shared storage to local SSDs, reducing the overall I/O requirements on your shared storage.
Documented maxima with vSphere 6.5:
- Virtual flash resource per host: 1
- Maximum cache for each virtual disk: 400GB
- Cumulative cache configured per host (for all virtual disks): 2TB
- Virtual disk size: 16TB
- Virtual host swap cache size: 4TB
- Flash devices (disks) per virtual flash resource: 8
|
Yes (vSphere Flash Read Cache)
vSphere 5.5 introduced the vSphere Flash Read Cache that enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource.
vSphere hosts can use the vSphere Flash Resource as vSphere Flash Swap Cache, which replaces the Swap to SSD feature previously introduced with vSphere 5.0. It provides a write-through cache mode that enhances virtual machines performance without the modification of applications and OSs.
At its core Flash Cache enables the offload of READ I/O from the shared storage to local SSDs, reducing the overall I/O requirements on your shared storage.
Documented maxima with vSphere 5.5:
- Virtual flash resource per host: 1
- Maximum cache for each virtual disk: 400GB
- Cumulative cache configured per host (for all virtual disks): 2TB
- Virtual disk size: 16TB
- Virtual host swap cache size: 4TB
- Flash devices (disks) per virtual flash resource: 8
There is also the Storage Accelerator (CBRC) feature that can be enabled (only) with VMware Horizon View (VDI).
View Storage Accelerator is an in memory (ESXi Server Memory) cache, caching common image blocks when reading virtual desktop images. It is applicable to stateless (non-persistent) as well as stateful (persistent) desktops and is transparent to the guest virtual machine/desktop. It does not require any special storage array technology and provides additional performance benefits when used in conjunction with Storage Array technologies.
http://www.vmware.com/files/pdf/techpaper/vmware-view-storage-accelerator-host-caching-content-based-read-cache.pdf
|
|
|
No
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
No (native)
Yes (with Vendor Add-On: vSAN 6.5)
VMware vSAN extends virtualization to storage with an integrated hyper-converged solution.
new in vSAN 6.5
Virtual SAN iSCSI Service (MS Cluster support)
2-Node Direct Connect (cross-connect two VSAN hosts with a simple ethernet cable)
512e drive support
|
No (native)
Yes (with Vendor Add-On: Virtual SAN 6.0)
Virtual SAN 6.0 delivers new all-flash architecture on flash devices to deliver high, predictable performance and sub-millisecond response times for some of the most demanding enterprise applications.
With double the scalability of up to 64 nodes per cluster and up to 200 virtual machines per host along with performance enhancements and highly efficient snapshot and clone technology, Virtual SAN 6.0 is the ideal storage platform for virtual machines.
Hosts per Cluster 64 (vSphere 5.x it was 32)
VMs per Host 200 (vSphere 5.x it was 100)
IOPS per Host 90K (vSphere 5.x it was 20K)
Snapshot depth per VM 32 (vSphere 5.x it was 2)
Virtual Disk size 62 TB (vSphere 5.x it was 2 TB)
Rack Awareness – Virtual SAN 6.0 Fault Domains provide the ability to tolerate rack failures and power failures in addition to disk, network and host failures.
vSphere Requirements
Virtual SAN 6.0 requires VMware vCenter Server 6.0. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configurable and monitored exclusively from the VMware vSphere Web Client.
Virtual SAN requires a minimum of three vSphere hosts contributing local storage capacity in order to form a supported cluster. The minimum, three-host, configuration enables the cluster to meet the lowest availability requirement of tolerating at least one host, disk, or network failure. The vSphere hosts require vSphere version 6.0 or later.
Disk Controllers
Each vSphere host that contributes storage to the Virtual SAN cluster requires a disk controller. This can be a SAS or SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must function in one of two modes:
- Pass-through mode
- RAID 0 mode
Network Interface Cards (NIC)
In Virtual SAN hybrid architectures each vSphere host must have at least one 1Gb Ethernet or 10Gb Ethernet capable network adapter. VMware recommends 10Gb.
The all-flash architecture is only supported with 10Gb Ethernet capable network adapters. For redundancy and high availability, a team of network adapters can be configured on a per-host basis; the teaming of network adapters for link aggregation (performance) is not supported. VMware considers NIC teaming for redundancy a best practice, but it is not necessary for building a fully functional Virtual SAN cluster.
|
|
Storage Integration (API)
Details
|
Integrated StorageLink (deprecated)
Integrated StorageLink was retired in XenServer 6.2.
Background: XenServer 6 introduced Integrated StorageLink Capabilities. It replaces the StorageLink Gateway technology used in previous editions and removes the requirement to run a VM with the StorageLink components. It provides access to use existing storage array-based features such as data replication, de-duplication, snapshot and cloning. Citrix StorageLink allows to integrate with existing storage systems, gives a common user interface across vendors and talks the language of the storage array i.e. exposes the native feature set in the storage array. StorageLink also provides a set of open APIs that link XenServer and Hyper-V environments to third party backup solutions and enterprise management frameworks. There is a limited HCL for StorageLink supporting arrays. Details: http://bit.ly/2dqKRUT
|
No
VMware provides various storage related APIs in order to enhance storage functionality and integration between storage devices and vSphere.
|
Yes (VASA, VAAI and VAMP)
VSAN takes advantage of capabilities surfaced by VASA; vSphere 5.5 introduced a new and simpler VAAI UNMAP/Reclaim command.
VMware provides various storage related APIs in order to enhance storage functionality and integration between storage devices and vSphere.
vSphere 5.1 introduces additional VAAI NAS enhancements to enable array-based snapshots to be used for vCloud Director fast-provisioned vApps. This feature in vSphere 5.1 enables vCloud Director to off-load the creation of linked clones to the NAS storage array in a similar manner to how View does it in vSphere 5.0.
vSphere 5 had already extended the vStorage APIs by adding VASA (vSphere Storage API for Storage Awareness) and new VAAI primitives:
1) VASA : Storage Awareness is a new set of APIs that will enable VMware vCenter Server to detect the capabilities of the storage array LUNs (e.g. RAID level, SATA v SSD, availability) enabling to select the appropriate disk for virtual machine placement or the creation of datastore clusters. The storage hardware needs to be VASA enabled to provide this integration.
2) VAAI: vStorage API for Array Integration enables integration of array-based capabilities. It basically allows to offload task directly to storage array HW for better performance rather than executing it in software. It supports: Full copy, Block zeroing, Hardware-assisted locking
New with vSphere 5:
• vSphere® Thin Provisioning (Thin Provisioning), enabling the reclamation of unused space and monitoring of space usage for thin-provisioned LUNs (alerting if running out!)
• Hardware acceleration for NAS
• SCSI standardization by T10 compliancy for full copy, block zeroing and hardware-assisted locking
3) VAMP: vStorage API for Multi Pathing: Enables storage partners to create multipathing plugins for vendor specific capabilities.
|
|
|
Basic
Virtual disks on block-based SRs (e.g. FC, iSCSI) have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI.
Note: Bear in mind that QoS settings are applied to virtual disks accessing the LUN from the same host; QoS is not applied across hosts in the pool.
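For illustration only, the same per-VBD QoS setting can also be applied programmatically through the XenAPI Python bindings rather than the xe CLI. In the sketch below the host, credentials, VM name and the ionice scheduler/class values are placeholders and should be checked against the XenServer Administrator's Guide for your release; it assumes virtual disks on a block-based SR.
# Hedged sketch: set the I/O priority (QoS) fields on a VM's disk VBDs.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    for vbd in session.xenapi.VM.get_VBDs(vm):
        if session.xenapi.VBD.get_type(vbd) != "Disk":
            continue  # skip CD/DVD devices
        # qos_algorithm_type / qos_algorithm_params mirror the xe CLI fields
        session.xenapi.VBD.set_qos_algorithm_type(vbd, "ionice")
        session.xenapi.VBD.set_qos_algorithm_params(
            vbd, {"sched": "best-effort", "class": "2"})
finally:
    session.xenapi.session.logout()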
|
Limited (basic, no SIOC)
In vSphere 6.5 Storage IO Control has been reimplemented by leveraging the VAIO framework. You will now have the ability to specify configuration details in a VM Storage Policy and assign that policy to a VM or VMDK. You can also create a Storage Policy Component yourself and specify custom shares, limits and a reservation.
|
Yes (SIOC) - incl. NFS
Storage I/O Control allows the prioritization of storage disk access using shares across hosts in a cluster - kicks in when congestion occurs (exceeded latency) and dynamically allocates portions of hosts I/O queues to VMs running on the vSphere hosts based on shares assigned to the VMs.
Prior to 5.1 the default latency threshold for storage I/O control (SIOC) is 30msecs. With 5.1 rather than using a default/user selection for latency threshold, SIOC now can automatically determine the best threshold for a datastore.
SIOC is in addition to the basic disk shares and limits functionality, which allows you to manage basic disk access priority for VMs on the same host (not across hosts - so a single host with lower-priority VMs could congest I/O paths).
vSphere 5.0 extended Storage I/O Control to also support NFS datastores.
|
|
|
|
Networking |
|
|
Advanced Network Switch
Details
|
Yes (Open vSwitch) - vSwitch Controller
The Open vSwitch is fully supported
The vSwitch brings visibility, security, and control to XenServer virtualized network environments. It consists of a virtualization-aware switch (the vSwitch) running on each XenServer and the vSwitch Controller, a centralized server that manages and coordinates the behavior of each individual vSwitch to provide the appearance of a single vSwitch.
The vSwitch Controller supports fine-grained security policies to control the flow of traffic sent to and from a VM and provides detailed visibility into the behavior and performance of all traffic sent in the virtual network environment. A vSwitch greatly simplifies IT administration within virtualized networking environments, as all VM configuration and statistics remain bound to the VM even if it migrates from one physical host in the resource pool to another.
Details in the XenServer 7.0 vSwitch Controller Users Guide: http://bit.ly/2d98foz
|
No
vSphere Distributed Switch (VDS) spans multiple vSphere hosts and aggregates networking to a centralized datacenter-wide level, simplifying overall network management (rather than managing switches on individual host level) allowing e.g. the port state/setting to follow the vm during a vMotion (Network vMotion) and facilitates various other advanced networking functions - including 3rd party virtual switch integration.
Each vCenter Server instance can support up to 128 VDSs, each VDS can manage up to 2000 hosts.
|
Yes (VDS), Various Improvements (for LACP, DSCP for QoS, filtering etc.)
vSphere Distributed Switch (VDS) spans multiple vSphere hosts and aggregates networking to a centralized datacenter-wide level, simplifying overall network management (rather than managing switches at the individual host level). It allows, for example, the port state/settings to follow the VM during a vMotion (Network vMotion) and facilitates various other advanced networking functions - including 3rd party virtual switch integration (fee-based Nexus 1000v), see Add-Ons.
Each vCenter Server instance can support up to 128 VDSs, each VDS can manage up to 500 hosts.
VDS Key features:
- Distributed Virtual Port Groups (DV Port Groups) - Port groups that specify port configuration options for each member port
- Distributed Virtual Uplinks (dvUplinks) - dvUplinks provide a level of abstraction for the physical NICs (vmnics) on each host
- Private VLANs (PVLANs) - PVLAN support enables broader compatibility with existing networking environments using the technology
- Network vMotion - Simplifies monitoring and troubleshooting by tracking the networking state (such as counters and port statistics) of each virtual machine as it moves from host to host on a VDS
- Bi-directional Traffic Shaping - Applies traffic shaping policies on DV port group definitions, defined by average bandwidth, peak bandwidth and burst size
- Third-party Virtual Switch Support - Switch extensibility for integration of third-party control planes, data planes and user interfaces, including the IBM 5000v and Cisco Nexus 1000v
Enhancements in vSphere 6.0:
- Network I/O Control (NIOC) Version 3
- Ability to reserve bandwidth at a VMNIC
- Ability to reserve bandwidth at a vSphere Distributed Switch (VDS) Portgroup
- SR-IOV support for 1024 Virtual Functions
Enhancements in vSphere 5.5:
- The enhanced link aggregation feature provides choice in hashing algorithms and also increases the limit on number of link aggregation groups
(22 new hashing algorithm options, 64 LAGs per host and 64 LAGs per VMware vSphere VDS, new workflows to configure LACP across a large number of hosts via templates)
- Additional port security is enabled through traffic filtering support based on three types of qualifiers
(MAC source and destination address, traffic types, and IP attributes like source/target IP etc.).
- Prioritizing traffic at layer 3 increases Quality of Service support.
(Differentiated Service Code Point - DSCP - tagging enables users to insert tags in the IP header in layer 3 (routing) environments. Physical routers function better with an IP header tag than with an Ethernet header tag (802.1p).)
- A packet-capture tool provides monitoring at the various layers of the virtual switching stack.
- Other enhancements include improved single-root I/O virtualization (SR-IOV) support and 40GB NIC support.
(The workflow of configuring SR-IOV-enabled physical NICs is simplified; additionally, users can communicate the port group properties defined on the vSphere standard switch (VSS) or VDS to the virtual functions)
- 40GB NIC Support (support for 40GB NICs; in the initial release the functionality is delivered via Mellanox ConnectX-3 VPI adapters configured in Ethernet mode)
vSphere 5.1 introduced a number of improvements on a) operational b) troubleshooting and c) scalability aspects.
a) Operational:
1) Network health check
2) VDS configuration backup and restore
3) Management network rollback and recovery
4) Distributed port - auto expand
5) MAC address management
6) Link Aggregation Control Protocol (LACP) support
7) Bridge Protocol Data unit filter
b) Monitoring/troubleshooting:
In vSphere 5.1, the port mirroring feature is enhanced through additional support for RSPAN and ERSPAN capability. The NetFlow feature now supports NetFlow version 10 - also referred to as Internet Protocol Flow Information eXport (IPFIX), an IETF
standard - rather than the old NetFlow version 5. This release also provides enhancements to the SNMP protocol by supporting all three versions (v1, v2c, v3) with enhanced networking MIBs.
c) Scalability
Number of VDS per vCenter: 128 (up from 32 with 5.0), hosts per VDS: 500 (up from 350) etc.
Details here: http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-51-networking-guide.pdf
vSphere 5 introduced three new features in the Distributed Switch that provide enhanced monitoring and troubleshooting capability:
- NetFlow: NetFlow is a networking protocol that collects IP traffic information as records and sends them to a collector for traffic flow analysis.
- Port Mirror: Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected to another switch port. This provides intra and inter host network traffic monitoring capabilities.
- LLDP: with vSphere 5.0 VMware supports the IEEE 802.1AB standard-based Link Layer Discovery Protocol (LLDP). LLDP helps with the management and configuration of heterogeneous network devices from different vendors. Prior to this, vSphere already supported Cisco's CDP discovery protocol.
Please note that these new features are not backwards compatible with earlier vDS versions - when creating a vDS you now need to select the correct virtual hardware version of the vDS.
|
|
|
Yes (incl. LACP - New)
XenServer 6.1 added the following functionality, maintained with XenServer 6.5:
- Link Aggregation Control Protocol (LACP) support: enables the use of industry-standard network bonding features to provide fault-tolerance and load balancing of network traffic.
- Source Load Balancing (SLB) improvements: allows up to 4 NICs to be used in an active-active bond. This improves total network throughput and increases fault tolerance in the event of hardware failures. The SLB balancing algorithm has been modified to reduce load on switches in large deployments.
Background:
XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:
• LACP bonding is only available for the vSwitch, while active-active and active-passive are available for both the vSwitch and Linux bridge.
• When the vSwitch is the network stack, you can bond either two, three, or four NICs.
• When the Linux bridge is the network stack, you can only bond two NICs.
XenServer 6.1 provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:
• Active/Active mode, with VM traffic balanced between the bonded NICs.
• Active/Passive mode, where only one NIC actively carries traffic.
• LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the server.
Reference: http://bit.ly/2d8uA7C
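As an illustration (UUIDs are placeholders), an LACP bond can be created from the xe CLI roughly as follows, assuming the vSwitch network stack is in use:
# Create a network for the bond, then bond two physical interfaces (PIFs) in LACP mode
xe network-create name-label=bond0-network
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=lacp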
|
Yes (no LACP Support)
vSphere has integrated NIC teaming capabilities. To utilize NIC teaming, two or more network adapters must be uplinked to a virtual switch (standard or distributed).
The key advantages of NIC teaming are:
- Increased network capacity for the virtual switch hosting the team.
- Passive failover in the event one of the adapters in the team goes down
There are various NIC load balancing (e.g. based on originating port, source MAC or IP hash) and failover detection algorithms (link status, Beacon probing).
|
Yes (up to 32 NICs)
vSphere has integrated NIC teaming capabilities. To utilize NIC teaming, two or more network adapters must be uplinked to a virtual switch (standard or distributed).
The key advantages of NIC teaming are:
- Increased network capacity for the virtual switch hosting the team.
- Passive failover in the event one of the adapters in the team goes down
There are various NIC load balancing (e.g. based on originating port, source MAC or IP hash) and failover detection algorithms (link status, Beacon probing) in vSphere, for details refer to: http://bit.ly/16Sm1ll
The maximum number of supported (and teamed) adapters per host varies by vendor and speed (max 32 for 1Gb and 8 for 10Gb)
New in vSphere 5.5. is the enhancement of the LACP capabilities.
The enhanced link aggregation feature provides choice in hashing algorithms and also increases the limit on number of link aggregation groups (22 new hashing algorithm options, 64 LAGs per host and 64 LAGs per VMware vSphere VDS, new workflows to configure LACP across a large number of hosts via templates)
Added in 5.1 was the Link Aggregation Control Protocol (LACP) support - which is a standards-based method to control the bundling of several physical network links together to form a logical channel for increased bandwidth and redundancy purposes.
LACP enables a network device to negotiate an automatic bundling of links by sending LACP packets to the peer. As part of the vSphere 5.1 release, VMware now supports this standards-based link aggregation protocol.
|
|
|
Yes (limited for mgmt. traffic)
VLANs are supported with XenServer. To use VLANs with virtual machines, use switch ports configured as 802.1Q VLAN trunk ports in combination with the XenServer VLAN feature to connect guest virtual network interfaces (VIFs) to specific VLANs (you can create new virtual networks with XenCenter and specify the VLAN IDs). The XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port - to place management traffic on a desired VLAN, the switch ports need to be configured to perform 802.1Q VLAN tagging/untagging (native VLAN or access mode ports). In this case the XenServer host is unaware of any VLAN configuration.
XenServer 6.1 removed a previous limitation which caused VM deployment delays when large numbers of VLANs were in use. This improvement enables administrators using XenServer 6.x and later to deploy hundreds of VLANs in a XenServer pool quickly.
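A minimal xe CLI sketch of attaching guest traffic to a VLAN (names, IDs and UUIDs are illustrative): create an external network for the VLAN, then tag it onto the trunked physical interface.
# Create the virtual network that guest VIFs will attach to
xe network-create name-label=VLAN50
# Create the VLAN on the trunked PIF, linking it to the new network (VLAN ID 50 is an example)
xe vlan-create network-uuid=<network-uuid> pif-uuid=<trunk-pif-uuid> vlan=50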
|
Yes
Support for VLANs, VLAN tagging with distributed or standard switch. Private VLANs (sub-VLANs) are supported with the virtual distributed switch only
|
Yes
Support for VLANs, VLAN tagging with distributed or standard switch. Private VLANs (sub-VLANs) are supported with the virtual distributed switch only
|
|
|
No
XenServer does not support PVLANs.
Please refer to the Citrix XenServer Design: Designing XenServer Network Configurations guide for details on network design and security considerations http://bit.ly/2ctPL11
|
No
Private VLANs (sub-VLANs) require the vSphere Distributed Switch, which is not included in this edition.
|
Yes
Private VLANs (sub-VLANs) are supported with the virtual distributed switch.
|
|
|
Yes (guests only)
XenServer 6.1 introduced formal support for IPv6 in XenServer guest VMs (maintained with 6.5). Customers had already used it with earlier releases (e.g. 6.0), but the 6.1 release notes list it as a new official feature: IPv6 Guest Support enables the use of IPv6 addresses within guests, allowing network administrators to plan for network growth.
Full support for IPv6 (i.e. assigning the host itself an IPv6 address) will be addressed in the future.
|
Yes
vSphere supports IPv6 for all major traffic types.
|
Yes
vSphere supports IPv6 for all major traffic types (guest OSes, management and VMkernel, e.g. IP-based storage) - see the enablement sketch below.
New in vSphere 5.5
- TCP Checksum Offload: For Network Interface Cards (NICs) that support this feature, the computation of the TCP checksum of the IPv6 packet is offloaded to the NIC.
- Software Large Receive Offload (LRO): LRO is a technique of aggregating multiple incoming packets from a single stream into a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed and saving CPU. Many NICs do not support LRO for IPv6 packets in hardware. For such NICs, LRO has been implemented in the vSphere network stack.
- Zero-Copy Receive: This feature prevents an unnecessary copy from the packet frame to a memory space in the vSphere network stack. Instead, the frame is processed directly.
vSphere 5.1 offers the same features, but only for IPv4. So, in vSphere 5.1, services such as vMotion, NFS, and Fault Tolerance had lower bandwidth in IPv6 networks when compared to IPv4 networks. vSphere 5.5 solves that problem: it delivers similar performance over both IPv4 and IPv6 networks.
Source: http://bit.ly/16SqiVU
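For reference, a minimal sketch of checking and enabling IPv6 on an ESXi host with esxcli, assuming the network ip namespace is present on the release in question (the change requires a host reboot):
# Show whether IPv6 is currently enabled on the host's TCP/IP stack
esxcli network ip get
# Enable IPv6 (takes effect after a host reboot)
esxcli network ip set --ipv6-enabled=true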
|
|
|
SR-IOV
XenServer 6.0 provided improved SR-IOV support for one specific card, Intel's Niantic NIC; this card is not officially supported for XenServer 7.0.
Generally, with SR-IOV VFs, functions that require VM mobility - such as live migration, workload balancing, rolling pool upgrade, High Availability and Disaster Recovery - are not possible.
|
VMDirectPath (No SR-IOV)
vSphere 6.5 SR-IOV support for 1024 Virtual Functions.
|
Yes (SR-IOV and VMDirectPath)
vSphere 6.0 SR-IOV support for 1024 Virtual Functions.
Enterprise Plus only; Enterprise, Standard - SR-IOV - n/a
vSphere 5.1 introduced support for SR-IOV (maintained in 5.5.) in addition to DirectPath I/O.
SR-IOV enables one PCI Express (PCIe) adapter to be presented as multiple, separate logical devices to virtual machines.
SR-IOV offers performance benefits and tradeoffs similar to those of DirectPath I/O. DirectPath I/O and SRIOV have similar functionality but you use them to accomplish different things. SR-IOV is beneficial in workloads with very high packet rates or very low latency requirements. Like DirectPath I/O, SR-IOV is not compatible with certain core virtualization features, such as vMotion (exception: Cisco UCS through VM-FEX)
SR-IOV does, however, allow for a single physical device to be shared amongst multiple guests.
With DirectPath I/O you can map only one physical function to one virtual machine. SR-IOV lets you share a single physical device, allowing multiple virtual machines to connect directly to the physical function. This functionality allows you to virtualize workloads with very low latency (less than 50 microseconds) and very high packet rates (greater than 50,000 PPS), such as network appliances or purpose-built solutions.
vSphere 5.5. introduces some improvements to the SR-IOV support:
- The workflow of configuring the SR-IOV-enabled physical NICs is simplified. A new capability is introduced that enables users to communicate the port group properties defined on the vSphere standard switch (VSS) or VDS to the
virtual functions.
The new control path through VSS and VDS communicates the port group-specific properties to the virtual functions. For example, if promiscuous mode is enabled in a port group, that configuration is then passed to virtual functions, and the virtual machines connected to the port group will receive traffic from other virtual machines.
|
|
|
Yes
You can set the Maximum Transmission Unit (MTU) for a XenServer network in the New Network wizard or for an existing network in its Properties window. The possible MTU value range is 1500 to 9216.
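For example (the name is illustrative), a jumbo-frame network can be created from the xe CLI by passing the MTU at creation time:
# Create a new network with a 9000-byte MTU; attach VM VIFs or storage PIFs to it afterwards
xe network-create name-label=jumbo-network MTU=9000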
|
Yes
vSphere supports jumbo frames for network traffic including iSCSI, NFS, vMotion and FT
|
Yes
vSphere supports jumbo frames for network traffic including iSCSI, NFS, vMotion and FT
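A minimal esxcli sketch of enabling jumbo frames on a standard vSwitch and a VMkernel interface (names are placeholders; for a distributed switch the MTU is set in vCenter instead):
# Raise the MTU on a standard vSwitch
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Raise the MTU on the VMkernel port used e.g. for iSCSI/NFS/vMotion traffic
esxcli network ip interface set --interface-name=vmk1 --mtu=9000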
|
|
|
Yes (TSO)
TCP Segmentation Offload can be enabled, see http://bit.ly/13e9WLi
By default, Large Receive Offload (LRO) and Generic Receive Offload (GRO) are disabled on all physical network interfaces. Though unsupported, you can enable them manually: http://bit.ly/2djwZiZ
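As noted, this is unsupported; for illustration only, the offloads can be toggled on a dom0 physical interface with standard ethtool flags (the interface name is a placeholder):
# Show current offload settings for the physical NIC in dom0
ethtool -k eth0
# Enable generic receive offload (and, where the driver supports it, large receive offload)
ethtool -K eth0 gro on
ethtool -K eth0 lro on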
|
Yes (TSO, NetQueue, iSCSI)
Supports TCP Segmentation Offload, NetQueue (VMware's implementation of Intel's VMDq) and iSCSI HW offload (for a limited number of HBAs).
No TOE support (you can use TOE-capable adapters in vSphere but the TOE function itself will not be used)
|
Yes (TSO, NetQueue, iSCSI)
Supports TCP Segmentation Offload, NetQueue (VMware's implementation of Intel's VMDq) and iSCSI HW offload (for a limited number of HBAs).
No TOE support (you can use TOE-capable adapters in vSphere but the TOE function itself will not be used)
|
|
|
Yes (outgoing)
QoS of network transmissions can be done either at the VM level (basic), by setting a kB/sec limit on the virtual NIC, or at the vSwitch level (global policies). With the DVS you can select a rate limit (with units) and a burst size (with units). Traffic to all virtual NICs included in this policy level (e.g. you can create VM groups) is limited to the specified rate, with individual bursts limited to the specified number of packets. To prevent inheriting existing enforcement, the QoS policy at the VM level should be disabled.
Background:
To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.
The QoS value limits the rate of transmission from the VM. As with many QoS approaches the QoS setting does not limit the amount of data the VM can receive. If such a limit is desired, Citrix recommends limiting the rate of incoming packets higher up in the network (for example, at the switch level).
Depending on the networking stack configured in the pool, you can set the Quality of Service (QoS) value on VM virtual interfaces (VIFs) in one of two places: either a) on the vSwitch Controller or b) in XenServer (using the CLI or XenCenter).
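A minimal xe CLI sketch of option b), setting the per-VIF outgoing rate limit described above (the UUID is a placeholder; the unit is kilobytes per second):
# Apply a 10240 kB/s transmit limit to a single virtual NIC
xe vif-param-set uuid=<vif-uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif-uuid> qos_algorithm_params:kbps=10240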
|
Limited (no NetIOC)
vSphere 6.x Network I/O Control (NIOC) Version 3
- Ability to reserve bandwidth at a VMNIC
- Ability to reserve bandwidth at a vSphere Distributed Switch (VDS) Portgroup
Network I/O Control enables you to specify quality of service (QoS) for network traffic in your virtualized environment. NetIOC requires the use of a virtual distributed switch (vDS). It allows network traffic to be prioritized by traffic type and supports the creation of custom network resource pools.
|
Yes (NetIOC), DSCP - NEW
vSphere 6.0 Network I/O Control (NIOC) Version 3
- Ability to reserve bandwidth at a VMNIC
- Ability to reserve bandwidth at a vSphere Distributed Switch (VDS) Portgroup
Network I/O Control enables you to specify quality of service (QoS) for network traffic in your virtualized environment. NetIOC requires the use of a virtual distributed switch (vDS). It allows network traffic to be prioritized by traffic type and supports the creation of custom network resource pools. With vSphere 5.x (including 5.5) the following traffic types can be controlled:
- Fault Tolerance traffic
- iSCSI traffic
- vMotion traffic
- Management traffic
- vSphere Replication (VR) traffic
- NFS traffic
- Virtual machine traffic
You can apply shares for soft control and limits for hard capping (for outgoing/egress traffic only!). You can still combine these with the bi-directional hard traffic shaping function on the switch (port group) level e.g. to avoid overload due to multiple concurrent incoming vMotion traffics.
vSphere 5.5 introduced prioritizing traffic at layer 3 to increase Quality of Service support through the use of Differentiated Service Code Point (DSCP) - tagging to enable users to insert tags in the IP header in layer 3 (routing) environments. Physical routers function better with an IP header tag than with an Ethernet header tag (802.1p)
In addition vSphere 5 also introduced support for IEEE 802.1p tagging (a standard for enabling QoS at MAC level). In simple terms this tag allows you to extend basic QoS beyond the vSphere environment (end-to-end) by attaching a tag which is carried from source to target even outside the vSphere aware host (egress). The IEEE 802.1p allows packets to be grouped into seven different traffic classes and while not standardized, higher-number tags typically indicate critical traffic that has higher priority.
|
|
Traffic Monitoring
Details
|
Yes (Port Mirroring)
The XenServer vSwitch has Traffic Mirroring capabilities. The Remote Switched Port Analyzer (RSPAN) policies support mirroring traffic sent or received on a VIF to a VLAN in order to support traffic monitoring applications. Use the Port Configuration tab in the vSwitch Controller UI to configure policies that apply to the VIF ports.
|
No
Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network-monitoring device connected to another switch port. Port mirroring is also referred to as Switch Port Analyzer (SPAN) on Cisco switches. Distributed Switch provides a similar port mirroring capability that is available on a physical network switch. After a port mirror session is configured with a destination -a virtual machine, a vmknic or an uplink port-the Distributed Switch copies packets to the destination.
|
Yes (Port Mirroring)
Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network-monitoring device connected to another switch port. Port mirroring is also referred to as Switch Port Analyzer (SPAN) on Cisco switches. In VMware vSphere 5, a Distributed Switch provides a similar port mirroring capability that is available on a physical network switch. After a port mirror session is configured with a destination -a virtual machine, a vmknic or an uplink port-the Distributed Switch copies packets to the destination.
|
|
|
Hypervisor
|
|
|
|
|
|
|
General |
|
|
Hypervisor Details/Size
Details
|
XenServer 7.0: Xen 4.6-based
NEW
XenServer is based on the open-source Xen hypervisor (XenServer 7.0 now runs on the Xen 4.6 hypervisor, provides GPT support and a smaller, more scalable Dom0 based on CentOS 7.2). XenServer automatically scales the amount of memory allocated to the Control Domain (Dom0) based on the physical memory available.
XenServer uses paravirtualization and hardware-assisted virtualization, requiring either modified guest OSes or hardware-assisted CPUs (the latter is more common as it is less restrictive and hardware-assisted CPUs like Intel VT/AMD-V have become standard). The device drivers are provided through a 64-bit Linux-based guest (CentOS) running in a control virtual machine (Dom0).
|
Virtual Hardware version 13
VMware-developed proprietary bare-metal hypervisor, which from vSphere 5 onwards is only available as ESXi (small footprint without Console OS). The hypervisor itself is < 150MB. Device drivers are provided with the hypervisor (not with the Console OS or dom0/parent partition as with Xen or Hyper-V technologies).
ESX is based on binary translation (full virtualization) but also uses aspects of para-virtualization (device drivers, VMware tools and the VMI interface for para-virtualization) and supports hardware-assisted virtualization aspects.
|
Virtual Hardware version 11
VMware-developed proprietary bare-metal hypervisor, which from vSphere 5 onwards is only available as ESXi (small footprint without Console OS). The hypervisor itself is < 150MB. Device drivers are provided with the hypervisor (not with the Console OS or dom0/parent partition as with Xen or Hyper-V technologies).
ESX is based on binary translation (full virtualization) but also uses aspects of para-virtualization (device drivers, VMware tools and the VMI interface for para-virtualization) and supports hardware-assisted virtualization aspects.
vSphere 6.0 features the new virtual hardware version 11
- 128vCPUs
- 4TB of RAM (NUMA aware)
- VDDM 1.1 GDI acceleration
- xHCI 1.0 controller compatible with OS X 10.8 + xHCI driver.
Expanded support for the latest x86 chip sets, devices, and drivers. Added support for FreeBSD 10.0 and Asianux 4 SP3 guest operating systems.
vSphere 5.5 features the new virtual hardware version 10 that includes support for LSI SAS for Solaris 11, CPU enablement for the latest processors and AHCI Controller Support to better run Mac OS X (as it allows you to present a SCSI based CD-ROM device to the guest). This new virtual-SATA controller supports both virtual disks and CD-ROM devices that can connect up to 30 devices per controller, with a total of four controllers.
It also enables a new Latency Sensitivity setting that can be used to reduce virtual machine latency. When the Latency sensitivity is set to high the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.
|
|
|
|
Host Config |
|
|
Max Consolidation Ratio
Details
|
1000 VMs per host
NEW
Citrix has listed the new scalability limits for XenServer 7.0 within the published configuration limits document http://bit.ly/2dtaur1
- Concurrent VMs per host (Windows or Linux): 1000
- Concurrent protected VMs per host with HA enabled: 500
Disclaimers are:
- The maximum number of VMs/host supported is dependent on VM workload, system load, and certain environmental factors. Citrix reserves the right to determine what specific environmental factors affect the maximum limit at which a system can function. For systems running over 500 VMs, Citrix recommends allocating 8GB RAM and 8 exclusively pinned vCPUs to dom0, and setting the OVS flow-eviction-threshold to 8192 (see the sketch below).
- The maximum number of logical processors supported differs by CPU. Please consult the XenServer Hardware Compatibility List for details on the maximum number of logical cores supported per vendor and CPU.
- Each plugged VBD, plugged VIF or Windows VM reduces this number by 1.
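A hedged sketch of how the OVS flow-eviction-threshold recommendation above might be applied from dom0 (the bridge name xenbr0 is illustrative and the configuration key can differ between Open vSwitch versions, so verify against the Citrix guidance):
# Raise the Open vSwitch flow eviction threshold on the relevant bridge (per-bridge setting)
ovs-vsctl set bridge xenbr0 other_config:flow-eviction-threshold=8192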
|
1024 VMs
vSphere 6.5 Maximums
- 64 hosts per cluster
- 8000 VMs per cluster
- 576 CPUs
- 12 TB of RAM
- 1024 VMs per host
|
2048 VMs
vSphere 6.0 Maximums
- 64 hosts per cluster (vSphere 5.x it was 32)
- 8000 VMs (previously 4000)
- 480 CPUs (vSphere 5.x it was 320 CPUs)
- 12 TB of RAM (vSphere 5.x it was 4 TB of RAM)
- 2048 VMs per host (vSphere 5.x it was 512 VMs).
vSphere 5.5 introduced a new maximum of 4096 virtual CPUs per host (i.e. when using vSMP), the previous limit of max 512 virtual machines / host remains. The maximum number of virtual CPUs per core has been increased to 32 (from 25 with ESX 5.1).
The most restrictive of these criteria applies - actually achievable maximum number depends obviously on workload characteristics and hardware configuration.
|
|
|
288 (logical)
NEW
XenServer 7.0 supports up to 288 Logical CPUs (threads) e.g. 8 socket with 18 cores each and Hyper-Threading enabled = 288 logical CPUs
|
576 physical
Hosts will support up to 576 physical CPUs (Dependent on hardware at launch time).
|
480 physical
Hosts will support up to 480 physical CPUs (Dependent on hardware at launch time).
|
|
Max Cores per CPU
Details
|
unlimited
The XenServer license does not restrict the number of cores per CPU
|
unlimited
unlimited (since vSphere 5)
|
unlimited
unlimited (since vSphere 5)
|
|
Max Memory / Host
Details
|
5TB
XenServer supports a maximum of 5TB per host. If a host has one or more paravirtualized guests (Linux VMs) running, then a maximum of 128 GB RAM is supported on the host.
|
12TB (EXCLUDING Reliable Memory Technology)
vSphere 6.5 Maximums
- 64 hosts per cluster
- 8000 VMs per cluster
- 576 CPUs
- 12 TB of RAM
- 1024 VMs per host
|
12TB
vSphere 6 Maximums
- 64 hosts per cluster (vSphere 5.x it was 32)
- 8000 VMs (previously 4000)
- 480 CPUs (vSphere 5.x it was 320 CPUs)
- 12 TB of RAM (vSphere 5.x it was 4 TB of RAM)
- 2048 VMs per host (vSphere 5.x it was 512 VMs).
Support for Reliable Memory Technology:
The vSphere ESXi Hypervisor runs directly in memory; a memory error can potentially crash it and the virtual machines running on the host. Reliable Memory Technology is a CPU hardware feature through which a region of memory is reported from the hardware to the vSphere ESXi Hypervisor as being more "reliable". This information is then used to optimize the placement of the VMkernel and other critical components such as the initial thread, hostd and the watchdog process, and helps guard against memory errors.
|
|
|
|
VM Config |
|
|
|
32
NEW
XenServer 7.0 now supports 32 vCPUs for Windows and Linux VMs. Actual numbers vary with the guest OS version (e.g. license restrictions).
|
128 vCPU
128 vCPU
Maximum number of virtual CPUs configurable for the vm and presented to the guest operating system - supported numbers vary greatly with specific guest OS version, please check!
|
128 vCPU
NEW
128 vCPU (64 vCPU with 5.5)
Maximum number of virtual CPUs configurable for the vm and presented to the guest operating system - supported numbers vary greatly with specific guest OS version, please check!
|
|
|
1.5TB
NEW
A maximum of 1.5TB is supported for guest OSes; the actual number varies greatly with guest OS version, so please check for specific guest support.
The maximum amount of physical memory addressable by your operating system varies. Setting the memory to a level greater than the operating-system-supported limit may lead to performance issues within your guest. Some 32-bit Windows operating systems can support more than 4 GB of RAM through use of the physical address extension (PAE) mode. The limit for 32-bit PV virtual machines is 64GB. Please consult your guest operating system Administrator's Guide and the XenServer Virtual Machine User's Guide for more details.
|
6TB
NEW
6TB
amount of vRAM configurable in a virtual machine (presented to the guest OS)
|
4TB
NEW
4TB (1TB with 5.5)
amount of vRAM configurable in a virtual machine (presented to the guest OS)
|
|
|
No
You cannot configure serial ports (as virtual hardware) for your VMs.
|
32 (no vSPC)
NEW
32 ports
Virtual machine serial ports can connect to physical host port, output file, named pipes or network.
|
32 ports
NEW
32 ports (4 ports with 5.5)
Virtual machine serial ports can connect to physical host port, output file, named pipes or network.
Maximum number of serial ports for vm: 32
This edition includes support for virtual Serial Port Concentrators: redirect virtual machine serial ports over a standard network link to third-party virtual serial port concentrators, which maintain serial connections with IP-enabled serial devices when migrating VMs between hosts (e.g. used for traditional serial console and monitoring solutions)
Enterprise Plus and Enterprise only; Standard - n/a
|
|
|
No (except mass storage)
XenServer doesn’t natively support USB passthrough of anything but mass storage devices.
|
Yes (USB 1.x, 2.x and 3.x) with max 20 USB devices per vm
vSphere supports a USB (host) controller per virtual machine. USB 1.x, 2.x and 3.x supported. One USB host controller of each version 1.x, 2.x, or 3.x can be added at the same time.
A maximum of 20 USB devices can be connected to a virtual machine (Guest operating systems might have lower limits than allowed by vSphere)
|
Yes (USB 1.x, 2.x and 3.x) with max 20 USB devices per vm
vSphere supports a USB (host) controller per virtual machine. USB 1.x, 2.x and 3.x supported. One USB host controller of each version 1.x, 2.x, or 3.x can be added at the same time.
A maximum of 20 USB devices can be connected to a virtual machine (Guest operating systems might have lower limits than allowed by vSphere)
vSphere 5 introduced limited support for USB3.0 devices (pass-through from client device to Linux guests only). Previous USB versions are fully supported for Windows and Linux guests and pass-through from client and host to the virtual machines.
|
|
|
Yes (disk, NIC)
XenServer supports adding disks and network adapters while the VM is running - hot plug requires the specific guest OS to support these functions, so please check for specific support with your OS vendor.
|
Yes (CPU, Mem, Disk, NIC, PCIe SSD)
vSphere adds the ability to perform hot-add and remove of SSD devices to/from a vSphere host.
VMware Hot add (Memory and CPU) and hot plug (NIC, disks) requires the guest OS to support these functions - please check for specific support.
|
Yes (CPU, Mem, Disk, NIC, PCIe SSD)
vSphere adds the ability to perform hot-add and remove of SSD devices to/from a vSphere host.
VMware Hot add (Memory and CPU) and hot plug (NIC, disks) requires the guest OS to support these functions - please check for specific support. vSphere 5 added support for hot-plug of multi-core CPUs (virtual machine hardware v8)
|
|
Graphic Acceleration
Details
|
Yes
NEW
Citrix XenServer is leading the way in the virtual delivery of 3D professional graphics applications and workstations. Its offerings include GPU pass-through (for NVIDIA, AMD and Intel GPUs) as well as hardware-based GPU sharing with NVIDIA GRID™ vGPU™ and Intel GVT-g™.
Details: http://bit.ly/2dCZV1z
|
No
GRID vGPU is a graphics acceleration technology from NVIDIA that enables a single GPU (graphics processing unit) to be shared among multiple virtual desktops. When NVIDIA GRID cards (installed in an x86 host) are used in a desktop virtualization solution running on VMware vSphere® 6.x, application graphics can be rendered with superior performance compared to non-hardware-accelerated environments. This capability is useful for graphics-intensive use cases such as designers in a manufacturing setting, architects, engineering labs, higher education, oil and gas exploration, clinicians in a healthcare setting, as well as for power users who need access to rich 2D and 3D graphical interfaces.
|
Yes (NVIDIA vGPU)
NEW
GRID vGPU is a graphics acceleration technology from NVIDIA that enables a single GPU (graphics processing unit) to be shared among multiple virtual desktops. When NVIDIA GRID cards (installed in an x86 host) are used in a desktop virtualization solution running on VMware vSphere® 6.0, application graphics can be rendered with superior performance compared to non-hardware-accelerated environments. This capability is useful for graphics-intensive use cases such as designers in a manufacturing setting, architects, engineering labs, higher education, oil and gas exploration, clinicians in a healthcare setting, as well as for power users who need access to rich 2D and 3D graphical interfaces.
|
|
|
|
Memory |
|
|
Dynamic / Over-Commit
Details
|
Yes (DMC)
XenServer 5.6 introduced Dynamic Memory Control (DMC) that enables dynamic reallocation of memory between VMs. This capability is maintained in 6.x.
XenServer DMC (sometimes known as 'dynamic memory optimization', 'memory overcommit' or 'memory ballooning') works by automatically adjusting the memory of running VMs, keeping the amount of memory allocated to each VM between specified minimum and maximum memory values, guaranteeing performance and permitting greater density of VMs per server. Without DMC, when a server is full, starting further VMs will fail with 'out of memory' errors: to reduce the existing VM memory allocation and make room for more VMs, you must edit each VM's memory allocation and then reboot the VM. With DMC enabled, even when the server is full, XenServer will attempt to reclaim memory by automatically reducing the current memory allocation of running VMs within their defined memory ranges.
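For illustration, a hedged xe CLI sketch of adjusting a VM's dynamic memory range (the UUID and values are placeholders; values are in bytes, here 2 GB to 4 GB):
# Allow the VM's memory allocation to float between 2 GB and 4 GB
xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2147483648 max=4294967296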
|
Yes (Memory Ballooning)
vSphere uses several memory optimization techniques, mainly to over-commit memory and reclaim unused memory: Ballooning, memory compression and transparent page sharing. Last level of managing memory overcommit is hypervisor swapping (not desired).
When physical host memory is over-committed (e.g. the host has a total of 128GB of RAM but a total of 196GB are allocated to virtual machines), the memory balloon driver (vmmemctl) collaborates with the server to reclaim pages that are considered least valuable by the guest operating system. When memory is tight (i.e. all virtual machines are requesting their maximum memory allocation to be used), the guest operating system determines which pages to reclaim and, if necessary, swaps them to its own virtual disk.
|
Yes (Memory Ballooning)
vSphere uses several memory optimization techniques, mainly to over-commit memory and reclaim unused memory: Ballooning, memory compression and transparent page sharing. Last level of managing memory overcommit is hypervisor swapping (not desired).
When physical host memory is over-committed (e.g. the host has a total of 128GB of RAM but a total of 196GB are allocated to virtual machines), the memory balloon driver (vmmemctl) collaborates with the server to reclaim pages that are considered least valuable by the guest operating system. When memory is tight (i.e. all virtual machines are requesting their maximum memory allocation to be used), the guest operating system determines which pages to reclaim and, if necessary, swaps them to its own virtual disk.
|
|
Memory Page Sharing
Details
|
No
XenServer does not feature any transparent page sharing algorithm.
|
Yes (Transparent Page Sharing)
vSphere uses several memory techniques: Ballooning, memory compression and transparent page sharing. Last level of managing memory overcommit is hypervisor swapping (not desired).
A good example is a scenario where several virtual machines are running instances of the same guest operating system, have the same applications or components loaded, or contain common data. In such cases, a host uses a proprietary transparent page sharing technique to securely eliminate redundant copies of memory pages. As a result, higher levels of over-commitment can be supported.
|
Yes (Transparent Page Sharing)
vSphere uses several memory techniques: Ballooning, memory compression and transparent page sharing. Last level of managing memory overcommit is hypervisor swapping (not desired).
A good example is a scenario where several virtual machines are running instances of the same guest operating system, have the same applications or components loaded, or contain common data. In such cases, a host uses a proprietary transparent page sharing technique to securely eliminate redundant copies of memory pages. As a result, higher levels of over-commitment can be supported.
|
|
|
No
There is no support for large memory pages in XenServer
|
Yes
Large Memory Pages for Hypervisor and Guest Operating System - in addition to the usual 4KB memory pages ESX also makes 2MB memory pages available.
|
Yes
Large Memory Pages for Hypervisor and Guest Operating System - in addition to the usual 4KB memory pages ESX also makes 2MB memory pages available.
|
|
HW Memory Translation
Details
|
Yes
Yes, XenServer supports Intel EPT and AMD-RVI, see http://bit.ly/2dtgHmP
|
Yes
Yes, vSphere leverages AMD RVI and Intel EPT technology for the MMU virtualization in order to reduce the virtualization overhead associated with page-table virtualization
|
Yes
Yes, vSphere leverages AMD RVI and Intel EPT technology for the MMU virtualization in order to reduce the virtualization overhead associated with page-table virtualization
|
|
|
|
Interoperability |
|
|
|
Yes, incl. vApp
XenServer 6 introduced the ability to create multi-VM and boot sequenced virtual appliances (vApps) that integrate with Integrated Site Recovery and High Availability. vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard. There is full support for VM disk and OVF appliance imports directly from XenCenter with the ability to change VM parameters (virtual processor, virtual memory, virtual interfaces, and target storage repository) with the Import wizard. Full OVF import support for XenServer, XenConvert and VMware.
|
Yes
vCenter can export and import VMs, virtual appliances and vApps stored in OVF. A vApp is a container comprised of one or more virtual machines, which uses OVF to specify and encapsulate all its components and policies
|
Yes
vCenter can export and import VMs, virtual appliances and vApps stored in OVF. A vApp is a container comprised of one or more virtual machines, which uses OVF to specify and encapsulate all its components and policies
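As an aside, OVF export can also be driven from the command line with VMware's separately installed ovftool utility; the locator below is purely illustrative:
# Export a powered-off VM from vCenter to an OVF package on local disk
ovftool "vi://administrator@vcenter.example.com/Datacenter/vm/MyVM" /tmp/MyVM.ovf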
|
|
|
Improving
XenServer has an improving HCL featuring the major vendors and technologies but compared to e.g. VMware and Microsoft the list is somewhat limited - so check support first. Links to XenServer HCL and XenServer hardware verification test kits are here: http://hcl.xensource.com/
|
Very Comprehensive (see link)
vSphere has a very comprehensive and well documented set of hardware components.
For compatible systems and devices see http://www.vmware.com/resources/compatibility/search.php
|
Very Comprehensive (see link)
vSphere has a very comprehensive and well documented set of hardware components.
For compatible systems and devices see http://www.vmware.com/resources/compatibility/search.php
|
|
|
Good
NEW
All major:
- Microsoft Windows 10
- Microsoft Windows Server 2012 (Server 2016 not yet released by Microsoft at time of writing)
- Ubuntu 14.04, 16.04
- SUSE Linux Enterprise Server 11 SP3, SLED 11SP3, SLES 12, SLED 12, SLED 12 SP1
- Scientific Linux 5.11, 6.6, 7.0, 7.1, 7.2
- Red Hat Enterprise Linux (RHEL) 5.10, 5.11, 6.5, 6.6, 7.0, 7.1, 7.2
- Oracle Enterprise Linux (OEL) 5.10, 5.11, 6.5, 6.6, 7.0, 7.1, 7.2
- Oracle UEK 6.5
- CentOS 5.10, 5.11, 6.5, 6.6, 7.0, 7.1
- Debian 7.2, 8.0
Refer to the XenServer 7.0 Virtual Machine User Guide for details: http://bit.ly/29xyuIQ
|
Very Comprehensive (see link)
Very comprehensive - vSphere 6.5 is compatible with various versions of: Asianux, Canonical, CentOS, Debian, FreeBSD, OS/2, Microsoft (MS-DOS, Windows 3.1, 95, 98, NT4, 2000, 2003, 2008, 2012 incl. R2, XP, Vista, Win7, 8), Netware, Oracle Linux, SCO OpenServer, SCO Unixware, Solaris, RHEL, SLES and Apple OS X server.
Details: http://www.vmware.com/resources/compatibility/search.php?action=base&deviceCategory=software
|
Very Comprehensive (see link)
Very comprehensive - vSphere 5.5 is compatible with various versions of: Asianux, Canonical, CentOS, Debian, FreeBSD, OS/2, Microsoft (MS-DOS, Windows 3.1, 95, 98, NT4, 2000, 2003, 2008, 2012 incl. R2, XP, Vista, Win7, 8), Netware, Oracle Linux, SCO OpenServer, SCO Unixware, Solaris, RHEL, SLES and Apple OS X server.
Details: http://www.vmware.com/resources/compatibility/search.php?action=base&deviceCategory=software
|
|
Container Support
Details
|
Yes
Docker support for XenServer is a feature of XenServer 6.5 SP1 or later and is delivered as a supplemental pack named 'xs-container', which also includes support for CoreOS and cloud drives.
More info: http://bit.ly/2mM2vde
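Installation is typically performed from the control domain; a hedged sketch, with the ISO filename as a placeholder for whatever pack Citrix ships for the release in use:
# Install the container support supplemental pack in dom0
xe-install-supplemental-pack xs-container-<version>.iso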
|
No
NEW
vSphere Integrated Containers
|
|
|
|
Yes (SDK, API, PowerShell)
The Software Development Kit provides the architectural overview of the APIs and use of SDK tools provided: http://docs.citrix.com/content/dam/docs/en-us/xenserver/xenserver-7-0/downloads/xenserver-7-0-sdk-guide.pdf
The XenServer 7.0 management API is documented in detail here: http://bit.ly/2dqNbev
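Alongside the SDK and API, day-to-day automation is commonly scripted against the xe CLI; a trivial example:
# List all VMs with their name and power state
xe vm-list params=name-label,power-state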
|
Web Services API/SDK, CIM, Perl, .NET, Java SDKs, Client Plug-In API, vSphere Clip, vMA
VMware provides several public API and Software Development Kits (SDK) products. You can use these products to interact with the following areas:
- host configuration, virtualization management and performance monitoring (vSphere Web Services API provides the basis for VMware management tools - available through the vSphere Web Services SDK). VMware provides language-specific SDKs (vSphere SDKs for Perl, .NET, or Java)
- server hardware health monitoring and storage management (CIM interface compatible with the CIM SMASH specification, storage management through CIM SMI-S and OEM/IHV packaged CIM implementations)
- extending the vSphere Client GUI (vSphere Client Plug-In API)
- access and manipulation of virtual storage - VMware Virtual Disk Development Kit (VDDK with library of C functions and example apps in C++)
- obtaining statistics from the guest operating system of a virtual machine (vSphere Guest SDK is a read-only programmatic interface for monitoring virtual machine statistics)
- scripting and automating common administrative tasks (CLIs that allow you to create scripts to automate common administrative tasks; the vSphere CLI is available for Linux and Microsoft Windows and provides a basic set of administrative commands, while vSphere PowerCLI is available on Microsoft Windows and has over 200 commonly used administrative commands)
Details Here: https://communities.vmware.com/community/vmtn/developer/
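As a trivial illustration of the remote vSphere CLI mentioned above (hostname and credentials are placeholders):
# Query an ESXi host's version remotely via the vSphere CLI's esxcli wrapper
esxcli --server=esxi01.example.com --username=root system version get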
|
Web Services API/SDK, CIM, Perl, .NET, Java SDKs, Client Plug-In API, vSphere Clip, vMA
VMware provides several public API and Software Development Kits (SDK) products. You can use these products to interact with the following areas:
- host configuration, virtualization management and performance monitoring (vSphere Web Services API provides the basis for VMware management tools - available through the vSphere Web Services SDK). VMware provides language-specific SDKs (vSphere SDKs for Perl, .NET, or Java)
- server hardware health monitoring and storage management (CIM interface compatible with the CIM SMASH specification, storage management through CIM SMI-S and OEM/IHV packaged CIM implementations)
- extending the vSphere Client GUI (vSphere Client Plug-In API)
- access and manipulation of virtual storage - VMware Virtual Disk Development Kit (VDDK with library of C functions and example apps in C++)
- obtaining statistics from the guest operating system of a virtual machine (vSphere Guest SDK is a read-only programmatic interface for monitoring virtual machine statistics)
- scripting and automating common administrative tasks (CLIs that allow you to create scripts to automate common administrative tasks; the vSphere CLI is available for Linux and Microsoft Windows and provides a basic set of administrative commands, while vSphere PowerCLI is available on Microsoft Windows and has over 200 commonly used administrative commands)
Details Here: https://communities.vmware.com/community/vmtn/developer/
vMA is a Linux-based virtual machine that is pre-installed with a command-line interface and select third-party agents needed to manage your vSphere infrastructure. Administrators and developers can use vMA to run scripts and agents to manage vSphere 5.5, vSphere 5.1 and later, vSphere 5.0 and later systems. vMA includes the vSphere SDK for Perl and the vSphere Command-Line Interface (vSphere CLI). vMA also includes an authentication component allowing direct connection to established target servers without user intervention.
|
|
|
CloudStack APIs, support for AWS API and OpenStack
Citrix CloudPlatform uses a RESTful CloudStack API. In addition to supporting the CloudStack API, CloudPlatform supports the Amazon Web Services (AWS) API. Future cloud API standards from bodies such as the Distributed Management Task Force (DMTF) will be implemented as they become available.
Details on the CloudPlatform API here: http://bit.ly/2djimIj
|
No (vCloud API)
vCloud API - provides support for developers who are building interactive clients of VMware vCloud Director using a RESTful application development style.
VMware provides a comprehensive vCloud API Programming Guide. vCloud API clients and vCloud Director servers communicate over HTTP, exchanging representations of vCloud objects. These representations take the form of XML elements. You use HTTP GET requests to retrieve the current representation of an object, HTTP POST and PUT requests to create or modify an object, and HTTP DELETE requests to delete an object.
The guide is intended for software developers who are building VMware Ready Cloud Services, including interactive clients of VMware vCloud Director. The guide discusses Representational State Transfer (REST) and RESTful programming conventions, the Open Virtualization Format Specification, and VMware Virtual machine technology.
|
vCloud API
vCloud API - provides support for developers who are building interactive clients of VMware vCloud Director using a RESTful application development style.
VMware provides a comprehensive vCloud API Programming Guide. vCloud API clients and vCloud Director servers communicate over HTTP, exchanging representations of vCloud objects. These representations take the form of XML elements. You use HTTP GET requests to retrieve the current representation of an object, HTTP POST and PUT requests to create or modify an object, and HTTP DELETE requests to delete an object.
The guide is intended for software developers who are building VMware Ready Cloud Services, including interactive clients of VMware vCloud Director. The guide discusses Representational State Transfer (REST) and RESTful programming conventions, the Open Virtualization Format Specification, and VMware Virtual machine technology.
The vCloud API Programming Guide for vCloud Director 5.5 can be found here: http://pubs.vmware.com/vcd-55/topic/com.vmware.ICbase/PDF/vcd_55_api_guide.pdf
|
|
|
Extensions
|
|
|
|
|
|
|
Cloud |
|
|
|
CloudPlatform; OpenStack
NEW
Note: Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. Rather than providing evaluations (green, amber, red), it provides the information that will help you to evaluate them for YOUR environment.
After the acquisition of Cloud.com in July 2011, Citrix has centered its cloud capabilities around its CloudPlatform suite.
Citrix CloudPlatform (latest release 4.5.1), powered by Apache CloudStack, is an open source cloud computing platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudPlatform manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudPlatform to deploy, manage, and configure cloud computing environments.
|
No
VMware Cloud Foundation is the unified SDDC platform that brings together VMware’s vSphere, vSAN and NSX into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud.
https://www.vmware.com/products/cloud-foundation.html
|
vCloud Suite (private), vCloud Hybrid Service (hybrid/public), Pivotal Spin-off
Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities.
Rather than providing evaluations (green, amber, red), it provides the information that will help you to evaluate them for YOUR environment.
Please NOTE: You can use the Create & Print Report button from the main menu to add your own evaluation to the matrix content and print your individual Custom Report.
Based on the foundation of the successful vSphere virtualization products, VMware has evolved its portfolio to develop comprehensive cloud offerings.
As part of their Software Defined Datacenter (SDDC) initiative VMware offers two main options for providing cloud environments:
1) Build a PRIVATE cloud infrastructure with VMware vCloud Suite (packaged software that customers deploy on premises) - http://bit.ly/15dpXgr
2) Rent a PUBLIC or HYBRID cloud service with VMware vCloud Hybrid Service (public Infrastructure-as-a-service based on VMware SDDC technologies and operated by VMware vCloud Hybrid Service or its partners) - http://bit.ly/15dpU4s
The above is complemented by VMware's Cloud Operations Services, which aim to provide insight, prioritized recommendations, and expert guidance to transform operational processes and organizational structures when moving to the cloud. http://bit.ly/15dpYks
Comment: The above categorization is in reality too simplistic and hides away some of the portfolio challenges VMware faces. The vCloud Suite contains both vCloud Director (vCD) and vCloud Automation Center (vCAC - the Dynamic Ops acquisition).
vCAC arguably contains strong hybrid capabilities (connecting multi-vendor clouds including vCD instances and Amazon EC2). Additionally there are question marks over the future positioning of vCloud Director, with increasing direction by VMware (and customer demand) to use the vCAC product for Enterprise workloads. There is a good positioning blog with guidance on vCAC vs. vCD at: http://bit.ly/1eQ3Bd2
VMware also has/had a number of technologies focusing on the Platform as a Service aspects, including Cloud Foundry, Spring, and the vFabric middleware solutions.
In April 2013 VMware announced that Pivotal, a new venture started by VMware and EMC, will focus on further developing these capabilities. Formally launched as a stand-alone entity, Pivotal is led by former VMware CEO Paul Maritz and announced Pivotal One, the name of Pivotal's next-generation Enterprise PaaS that will integrate new data fabrics, modern programming frameworks, cloud portability and support for legacy systems.
The first release of Pivotal One is expected Q4 2013.
|
|
|
|
Desktop Virtualization |
|
|
|
Citrix Desktop Virtualization (XenDesktop & XenApp 7.6 - NEW; ViaB; associated products)
Citrix is perceived by many to have the most comprehensive portfolio for desktop virtualization, alongside the largest overall market share.
Citrix's success in this space is historically based on its Terminal Services-like capabilities (Hosted SHARED Desktops, i.e. XenApp aka Presentation Server), but Citrix has over time added VDI (Hosted Virtual Desktops), mobility management (XenMobile), networking (NetScaler), cloud for Service Providers hosting desktops/apps (CloudPlatform) and other comprehensive capabilities to its portfolio (separate fee-based offerings).
Citrix's FlexCast approach promotes the 'any type of virtual desktop to any device' philosophy and facilitates the use of different delivery technologies (e.g. VDI, application publishing or streaming, client hypervisor etc.) depending on the respective use case (so not a one-size-fits-all approach).
XenDesktop 7.x history:
- Citrix's announcement of Project Avalon in 2011 promised the integration of a unified desktop/application virtualization capability into its CloudPlatform product. This was then broken up into the Excalibur Project (unifying XenDesktop and XenApp in the XenDesktop 7.x product) and the Merlin Release aiming to provide multi-tenant clouds to manage virtual desktops and applications.
- XenDesktop 7.1 added support for Windows Server 2012 R2 and Windows 8.1, and new Studio configuration of server-based graphical processing units (GPUs) considered an essential hardware component for supporting virtualized delivery of 3D professional graphics applications.
- In Jan 2014 Citrix announced that XenApp is back as product name, rather than using XenDesktop to refer to VDI as well as desktop/application publishing capabilities, also see http://gtnr.it/14KYg4b
- With XenDesktop 7.5 Citrix announced the capability to provision application and or desktop workloads to public and or private cloud infrastructures (Citrix CloudPlatform, Amazon and (later) Windows Azure. Wake-on-LAN capability has been added to Remote PC Access and AppDNA is now included in the product.
- XenDesktop 7.6 includes new features such as session prelaunch and session linger, support for unauthenticated (anonymous) users, and connection leasing, which makes recently used applications and desktops available even when the Site database is unavailable.
VDI in a Box: Citrix also has VDI in a Box (ViaB) offering (originating in the Kaviza acquisition) - a simple to deploy, easy to manage and scale VDI solution targeting smaller deployments and limited use cases.
In reality ViaB scales to larger environments (thousands of users) but has (due to its simplified nature and product positioning) restricted use cases compared to the full XenDesktop (there is no direct migration path between ViaB and XenDesktop). ViaB cannot, for instance, provide advanced Hosted Shared Desktops (VDI only), has no advanced graphics capabilities (HDX3DPro), has limited HA for fully persistent desktops, and no inherent multi-site management capabilities.
Overview here: http://bit.ly/1fXeA38
Recommended Read for VDI Comparison (Ruben Spruijt's VDI Smackdown): http://www.pqr.com/downloadformulier?file=VDI_Smackdown.pdf
|
VMware Horizon 7 (Vendor Add-On)
VMware Horizon 7
Deliver virtual or hosted desktops and applications through a single platform with VMware Horizon 7
http://www.vmware.com/products/horizon.html
|
VMware Horizon Suite (Vendor Add-On)
Horizon 6 announced: http://blogs.vmware.com/euc/2014/04/vmware-horizon-6-unveiled-today.html
Whats new?
- Remote Desktop Session Host (RDS) Hosted Apps (support for applications and full desktops running on Microsoft Remote Desktop Services Hosts).
- Cloud Pod Architecture (scale View above 10k users, across DCs etc.)
- Virtual SAN (vSAN added for free in the Horizon 6 Advanced and Enterprise Edition)
- Application Catalog
- vCops for View 6
The respective sections will be updated with Horizon View 6 information when the product becomes generally available (expected Q2/14)
VMware released the VMware Horizon Suite 5 in March 2013, essentially bringing VMware View, Mirage and Horizon alongside new capabilities together under the umbrella of a new product suite:
- Horizon View (aka VMware View) for virtual desktop delivery
- Horizon Mirage (aka VMware Mirage) for centralized (image) management of physical desktops or virtual desktops on Fusion (Mac or Linux) - NO support for VMware View virtual desktops for the initial release.
- Horizon Workspace (evolution of Horizon Application manager) providing access to applications and data on mobile device or computers
Please note that the Horizon Suite is a fee-based Add-On
Recommended Read: VDI Comparison (Ruben Spruijt's VDI Smackdown): http://www.pqr.com/downloadformulier?file=VDI_Smackdown.pdf
|
|
|
|
1 |
|
|
|
no
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
vSAN 6.5 (Vendor Add-On)
VMware vSAN 6.5 extends virtualization to storage with an integrated hyper-converged solution.
http://www.vmware.com/products/virtual-san.html
comparison: https://www.whatmatrix.com/comparison/SDS-and-HCI
|
Vendor Add-On: Virtual SAN
Virtual SAN 6.0 delivers a new all-flash architecture on flash devices, providing high, predictable performance and sub-millisecond response times for some of the most demanding enterprise applications.
What's new:
- Support for All-Flash configurations
- Fault Domains configuration
- Support for hardware encryption and checksum (See HCL)
- New on-disk format
- High performance snapshots / clones
- 32 snapshots per VM
- Scale
- 64 host cluster support
- 40K IOPS per host for hybrid configurations
- 90K IOPS per host for all-flash configurations
- 200 VMs per host
- 8000 VMs per Cluster
- up to 62TB VMDKs
- Default SPBM Policy
- Disk / Disk Group serviceability
- Support for direct-attached storage systems for blade servers (see HCL)
- Virtual SAN Health Service plugin
vSphere Requirements
Virtual SAN 6.0 requires VMware vCenter Server 6.0. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively from the VMware vSphere Web Client.
Virtual SAN requires a minimum of three vSphere hosts contributing local storage capacity in order to form a supported cluster. The minimum three-host configuration enables the cluster to meet the lowest availability requirement of tolerating at least one host, disk, or network failure. The vSphere hosts require vSphere version 6.0 or later.
Disk Controllers
Each vSphere host that contributes storage to the Virtual SAN cluster requires a disk controller. This can be a SAS or SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must function in one of two modes:
- Pass-through mode
- RAID 0 mode
Network Interface Cards (NIC)
In Virtual SAN hybrid architectures each vSphere host must have at least one 1Gb Ethernet or 10Gb Ethernet capable network adapter. VMware recommends 10Gb.
All-flash architectures are only supported with 10Gb Ethernet capable network adapters. For redundancy and high availability, a team of network adapters can be configured on a per-host basis; VMware considers this a best practice, but it is not necessary for building a fully functional Virtual SAN cluster. Note that teaming of network adapters for link aggregation (performance) is not supported.
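To make these requirements easier to reason about, here is a minimal Python sketch that checks a hypothetical cluster description against the rules above (three hosts contributing storage, controller in pass-through or RAID 0 mode, 10GbE mandatory for all-flash). All field names are illustrative; this is not a VMware API.

# Hypothetical helper: validates a cluster description against the Virtual SAN 6.0
# requirements described above. Field names are illustrative, not a VMware API.
def check_vsan_requirements(cluster, all_flash=False):
    """Return a list of human-readable violations; an empty list means the checks pass."""
    violations = []

    contributing = [h for h in cluster["hosts"] if h.get("contributes_storage")]
    if len(contributing) < 3:
        violations.append("Fewer than three hosts contribute local storage (minimum is three).")

    for host in cluster["hosts"]:
        if host.get("controller_mode") not in ("pass-through", "raid0"):
            violations.append(f"{host['name']}: controller must run in pass-through or RAID 0 mode.")
        required_gbps = 10 if all_flash else 1
        if host.get("nic_speed_gbps", 0) < required_gbps:
            violations.append(f"{host['name']}: needs at least {required_gbps}GbE networking.")

    return violations

if __name__ == "__main__":
    cluster = {
        "hosts": [
            {"name": "esx01", "contributes_storage": True, "controller_mode": "pass-through", "nic_speed_gbps": 10},
            {"name": "esx02", "contributes_storage": True, "controller_mode": "raid0", "nic_speed_gbps": 10},
            {"name": "esx03", "contributes_storage": False, "controller_mode": "pass-through", "nic_speed_gbps": 1},
        ],
    }
    for problem in check_vsan_requirements(cluster, all_flash=True):
        print(problem)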
|
|
|
|
2 |
|
|
Application Management
Details
|
Vendor Add-On: XenDesktop
EdgeSight, Director
The performance monitoring and troubleshooting aspects of Application Management in the context of Citrix are mostly addressed by XenDesktop 7 Director and XenDesktop 7 EdgeSight:
- Desktop Director is a real-time web tool, used primarily by Help Desk agents. In XenDesktop 7, Director's new troubleshooting dashboard provides real-time health monitoring of your XenDesktop 7 site.
- In XenDesktop 7, EdgeSight provides two key features, performance management and network analysis.
With XenDesktop 7, Director (the real-time assessment and troubleshooting tool) is included in all XenDesktop 7 editions.
The new EdgeSight features are included in both XenApp and XenDesktop Platinum edition entitlements; however, these features are based on the XenDesktop 7 platform. The environment must be XenDesktop 7 in order to leverage the new Director and EdgeSight features.
EdgeSight network analysis also requires NetScaler Enterprise or Platinum edition. With NetScaler Enterprise, real-time data for the last 60 minutes is provided; NetScaler Platinum edition has unlimited data retention.
How do you get it?
With XenDesktop 7, Director is included in all XenDesktop 7 editions. The new EdgeSight features are included in both XenApp and XenDesktop Platinum edition entitlements; however, these features are based on the XenDesktop 7 platform.
- All editions: Director - real-time monitoring and basic troubleshooting (up to 7 days of data)
- XD7 Platinum: EdgeSight performance management feature - includes #1 + historical monitoring (up to a full year of data through the monitoring SQL database)
- XD7 Platinum + NetScaler Enterprise: EdgeSight performance management and network analysis - includes #2 plus 60 mins. of network data
- XD7 Platinum + NetScaler Platinum: EdgeSight performance management and network analysis - includes #2 plus unlimited network data
http://bit.ly/17toPr8
Citrix EdgeSight is a performance and availability management suite for XenApp, Presentation Server, XenDesktop and endpoint systems (through agents running on physical systems or virtualized platforms). Citrix EdgeSight monitors applications, sessions, devices, and the network in real time. Details here: http://bit.ly/2cMOcMO
EdgeSight for NetScaler is an agent-less solution which provides real-time user performance monitoring specifically for web applications based upon actual user experience (response time). It provides both real-time and historical views to proactively identify potential problems. (Citrix NetScaler is an application switch - a physical or virtual appliance - that intelligently distributes, optimizes, and secures network traffic for Web applications. Features include load balancing, compression, Secure Sockets Layer (SSL) offload, a built-in application firewall, and dynamic content caching.) Details here: http://bit.ly/2cSLwiN
|
App Volumes
App Volumes is a portfolio of application and user management solutions for Horizon, Citrix XenApp and XenDesktop, and RDSH virtual environments.
https://www.vmware.com/products/appvolumes.html
|
VMware vFabric Hyperic - (disc.)
VMware announced the End of Availability of VMware vFabric Application Performance Manager, effective 06/01/2013.
The vFabric Hyperic component will continue to be available as a standalone product or through the vCenter Ops Advanced or Enterprise edition.
vFabric Hyperic monitors operating systems, middleware and applications running in physical, virtual and cloud environments:
Fast and Easy Web Infrastructure Monitoring
- Auto-discover over 120 middleware and applications, complete with pre-configured best practice collection for key performance indicators (KPIs) to accelerate monitoring setup.
- run-book deployment automation, including the ability to copy and reuse monitoring configurations and alert policies to quickly bring resources under management.
- Comprehensive monitoring for performance, configuration and security changes correlated in an easy to read user interface to enable quick root cause analysis (RCA).
- Advanced alerting and escalation workflows to reduce alert duplication, irrelevant alerts, and false alarms by setting alert condition definitions on a wide range of performance metrics.
- Administrative actions such as restarting servers or running garbage collection can be scheduled, run in response to an alert condition, or manually performed via Hyperic's web interface.
- Role-based alerting to assign problems to appropriate owners; Role-based security
- AJAX dashboard
- Manage applications across physical and cloud-based infrastructures
- Extended analysis tools include advanced dashboard charting, capacity planning, base lining and built-in reporting
- Enterprise-level scalability and high-availability deployment options
- Universally extensible, vFabric Hyperic API aggregates all subsystem-specific functionality
- Hyperic's Management Plugin Framework lets you build your own plugins to support any unsupported device; the HQU Plugin framework lets you create custom user interfaces to present performance data
|
|
|
|
3 |
|
|
|
Vendor Add-Ons: NetScaler Gateway, App Firewall, CloudBridge, Direct Inspection API
NEW
NetScaler provides various (network) security-related capabilities, e.g.:
- NetScaler Gateway: secure application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile
- NetScaler AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the NetScaler application delivery solution and is included with Citrix NetScaler, Platinum Edition.
Details here: http://bit.ly/17ttmKk
CloudBridge:
Initially marketed as NetScaler CloudBridge, Citrix CloudBridge provides a unified platform that connects and accelerates applications, and optimizes bandwidth utilization across public cloud and private networks.
CloudBridge encrypts the connection between the enterprise premises and the cloud provider so that all data in transit is secure.
http://bit.ly/17ttSYA
|
NSX (Vendor Add-On)
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
NSX (Vendor Add-On)
VMware NSX solves data center networking challenges by delivering a completely new operational model for networking. This model breaks through current physical network barriers and allows data center operators to achieve orders of magnitude better agility and economics.
VMware NSX exposes a complete suite of simplified logical networking elements and services including logical switches, routers, firewalls, load balancers, VPN, QoS, monitoring, and security. These services are provisioned in virtual networks through any cloud management platform leveraging the NSX APIs and can be arranged in any topology with isolation and multi-tenancy. Virtual networks are deployed non-disruptively over any existing network and on any hypervisor.
Key Features of NSX
• Logical Switching – Reproduce the complete L2 and L3 switching functionality in a virtual environment, decoupled from underlying hardware
• NSX Gateway – L2 gateway for seamless connection to physical workloads and legacy VLANs
• Logical Routing – Routing between logical switches, providing dynamic routing within different virtual networks.
• Logical Firewall – Distributed firewall, kernel enabled line rate performance, virtualization and identity aware, with activity monitoring
• Logical Load Balancer – Full featured load balancer with SSL termination.
• Logical VPN – Site-to-Site & Remote Access VPN in software
• NSX API – RESTful API for integration into any cloud management platform
NSX Use Cases
NSX is the ideal solution for data centers with more than 500 virtual machines. NSX delivers immediate benefits for innovative multi-tenant cloud service providers, large enterprise private and R&D clouds and multi-hypervisor cloud environments.
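As a hedged illustration of the NSX API listed under Key Features above, the following Python sketch lists logical switches over the REST interface using the requests library. The manager address and credentials are placeholders, and the endpoint path (taken from the NSX for vSphere API) should be verified against the API guide for the version in use.

# Minimal sketch of driving NSX through its RESTful API with the Python "requests" library.
# Hostname and credentials are placeholders; the endpoint path (listing logical switches,
# i.e. "virtual wires", in NSX for vSphere) should be verified for your NSX version.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # placeholder address
AUTH = ("admin", "changeme")                         # placeholder credentials

def list_logical_switches():
    resp = requests.get(
        f"{NSX_MANAGER}/api/2.0/vdn/virtualwires",
        auth=AUTH,
        verify=False,                                # lab-only: skip certificate validation
        headers={"Accept": "application/xml"},
    )
    resp.raise_for_status()
    return resp.text                                 # XML document describing the logical switches

if __name__ == "__main__":
    print(list_logical_switches())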
|
|
|
|
4 |
|
|
Workflow / Orchestration
Details
|
Workflow Studio (incl.)
Workflow Studio is included with this license and provides a graphical interface for workflow composition in order to reduce scripting. Workflow Studio allows administrators to tie technology components together via workflows. Workflow Studio is built on top of Windows® PowerShell and Windows Workflow Foundation. It natively supports Citrix products including XenApp, XenDesktop, XenServer and NetScaler.
Previously available as a component of the XenDesktop suite, Workflow Studio was retired in XenDesktop 7.x
|
vRealize Orchestrator
vRealize Orchestrator is included with vCenter Server Standard and allows admins to capture often-executed tasks/best practices and turn them into automated workflows (drag and drop), or to use out-of-the-box workflows.
http://www.vmware.com/products/vrealize-orchestrator.html
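As an illustration only, the short Python sketch below starts an Orchestrator workflow through the vRealize Orchestrator REST API. The appliance address, credentials and workflow ID are placeholders, and the /vco/api paths reflect the commonly documented vRO REST interface; they should be confirmed in the API explorer of the deployed version.

# Hedged sketch: start a vRealize Orchestrator workflow over its REST API.
# All identifiers below are placeholders, not values from this document.
import requests

VRO = "https://vro.example.local:8281"                 # placeholder appliance address
AUTH = ("vro-user", "changeme")                        # placeholder credentials
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"   # placeholder workflow ID

def run_workflow(parameters=None):
    body = {"parameters": parameters or []}            # workflow input parameters
    resp = requests.post(
        f"{VRO}/vco/api/workflows/{WORKFLOW_ID}/executions",
        json=body,
        auth=AUTH,
        verify=False,                                  # lab-only: skip certificate validation
    )
    resp.raise_for_status()
    # The Location header points at the new execution, which can be polled for its state.
    return resp.headers.get("Location")

if __name__ == "__main__":
    print(run_workflow())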
|
Orchestrator (incl.)
vCenter Orchestrator is included with vCenter Server Standard and allows admins to capture often-executed tasks/best practices and turn them into automated workflows (drag and drop), or to use out-of-the-box workflows. An increasing number of plug-ins is being developed to enable automation of tasks in related products, e.g. vCloud Director and Microsoft Active Directory.
VMware also has separately priced cloud-related orchestration tools such as vCloud Request Manager.
http://www.vmware.com/products/vcenter-orchestrator/buy.html
|
|
|
|
5 |
|
|
|
Integrated Site Recovery (incl.)
XenServer 6 introduced Integrated Site Recovery (maintained in 7.0), which utilizes the native remote data replication between physical storage arrays and automates the recovery and failback capabilities. The new approach removes the Windows VM requirement for the StorageLink Gateway components and now works with any iSCSI or hardware HBA storage repository. You can perform failover, failback and test failover, and configure and operate it using the Disaster Recovery option in XenCenter. Please note, however, that Site Recovery does NOT interact with the storage array, so you will have to e.g. manually break mirror relationships before failing over. You will also need to ensure that the virtual disks as well as the pool metadata (containing all the configuration data required to recreate your VMs and vApps) are correctly replicated to your secondary site.
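As a small, hedged example of checking the secondary site before a test failover, the Python sketch below uses the XenAPI bindings to list the storage repositories visible to a hypothetical DR pool master and how many virtual disks each holds; the host address and credentials are placeholders.

# Sketch using the XenAPI Python bindings (shipped with the XenServer SDK) to inspect
# the SRs on a secondary-site pool, since Integrated Site Recovery expects the replicated
# virtual disks and pool metadata to already be present there. Address/credentials are placeholders.
import XenAPI

def list_sr_summary(host_url="https://dr-pool-master.example.local",
                    user="root", password="changeme"):
    session = XenAPI.Session(host_url)
    session.xenapi.login_with_password(user, password)
    try:
        for ref, sr in session.xenapi.SR.get_all_records().items():
            # Report each storage repository with its type and virtual disk count.
            print(f"{sr['name_label']:30s} type={sr['type']:8s} vdis={len(sr['VDIs'])}")
    finally:
        session.xenapi.session.logout()

if __name__ == "__main__":
    list_sr_summary()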
|
Site Recovery Manager (Vendor Add-On)
VMware Site Recovery Manager
Perform frequent non-disruptive testing to ensure IT disaster recovery predictability and compliance. Achieve fast and reliable recovery using fully automated workflows and complementary Software-Defined Data Center (SDDC) solutions.
http://www.vmware.com/products/site-recovery-manager.html
|
Site Recovery Manager - (Fee-Based Add-On)
vCenter Site Recovery Manager
Protect all of your virtualized applications with vCenter™ Site Recovery Manager™, a disaster recovery solution that provides automated orchestration and non-disruptive testing of centralized recovery plans.
Features:
- non-disruptive testing of centralized recovery plans
- VM-centric policy-based storage and replication
- Centralized recovery plans
- Self-service, policy-based provisioning
- Automated disaster recovery failover
- Planned migration and disaster avoidance
- Automated failback
- Non-disruptive testing
- Flexible, cost-effective Replication
- DR automation for all virtualized applications
- Support for Third-party array-based replication
- Disaster recovery to the cloud via services based on Site Recovery Manager
- VMware vCloud Air Disaster Recovery
https://www.vmware.com/products/site-recovery-manager/features.html
vCenter Site Recovery Manager - Automates site recovery through creation and testing of recovery plans and integrates with 3rd party (array-based) storage replication.
New features in SRM 5.5 are:
- vSphere Replication supports movement of virtual machines by Storage DRS and Storage vMotion on the protected site
- Array-based replication supports movement of virtual machines by Storage DRS and Storage vMotion within a consistency group
- Preserve multiple point-in-time (PIT) images of virtual machines that are protected with vSphere Replication
- Protect virtual machines that reside on VMware vSphere Flash Read Cache storage (vSphere Flash Read Cache is disabled on virtual machines after recovery)
- Protect virtual machines that reside on the vSphere Storage Appliance (VSA) by using vSphere Replication. VSA does not require a Storage Replication Adapter (SRA) to work with SRM 5.5.
SRM v5 introduced host-based replication, which manages replication at the virtual machine (VM) level through VMware vCenter Server. It also enables the use of heterogeneous storage across sites and reduces costs by allowing lower-priced storage to be provisioned at the failover location.
SRM is supported with vCenter Standard and vCenter Foundation but not with the Essentials editions. Vendor Link: http://www.vmware.com/files/pdf/products/SRM/VMware-vCenter-SRM-Datasheet.pdf
|
|
|
|
6 |
|
|
|
Vendor Add-On: CloudPortal Business Manager, CloudStack Usage Server (Fee-Based Add-Ons)
NEW
The workload balancing engine (WLB), introduced with XenServer 5.6 FP1, added support for simple chargeback and reporting. The chargeback report includes, amongst other data, the name of the VM and its uptime, as well as usage for storage, CPU, memory and network reads/writes. You can use the Chargeback Utilization Analysis report to determine what percentage of a resource (such as a physical server) a specific department within your organization used.
see http://bit.ly/2djjiMA
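Purely as an illustration of the kind of arithmetic behind such a report (not Citrix code), the Python sketch below aggregates made-up per-VM usage samples into departmental percentage shares of a pooled resource.

# Illustrative sketch of the calculation a Chargeback Utilization Analysis report performs:
# given per-VM usage samples, work out each department's share of a chosen resource.
# All field names and figures below are invented for the example.
from collections import defaultdict

def departmental_share(samples, resource="cpu_hours"):
    """Return each department's percentage share of the chosen resource."""
    per_dept = defaultdict(float)
    for s in samples:
        per_dept[s["department"]] += s[resource]
    total = sum(per_dept.values()) or 1.0
    return {dept: round(100.0 * used / total, 1) for dept, used in per_dept.items()}

if __name__ == "__main__":
    usage = [
        {"vm": "web01", "department": "Sales",       "cpu_hours": 120.0},
        {"vm": "db01",  "department": "Engineering", "cpu_hours": 300.0},
        {"vm": "app02", "department": "Sales",       "cpu_hours": 80.0},
    ]
    print(departmental_share(usage))   # e.g. {'Sales': 40.0, 'Engineering': 60.0}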
|
vRealize Business (Vendor Add-On)
VMware vRealize Business Enterprise is an IT financial management (ITFM) tool that provides transparency and control over the costs and quality of IT services, enabling the CIO to align IT with the business and to accelerate IT transformation.
http://www.vmware.com/products/vrealize-business.html
|
vCenter Chargeback Manager (Vendor Add-On - discontinued for non Service Providers)
VMware has announced the End of Availability of all versions of VMware® vCenter™ Chargeback Manager™ for non-Service Provider customers, effective June 10, 2014. There is no change for VMware Service Provider Program (VSPP) Partners. See more at: http://www.vmware.com/products/vcenter-chargeback
Virtual machine resource consumption data is collected from VMware vCenter Server ensuring complete and accurate tabulation of resource costs. Integration with VMware vCloud Director also enables automated chargeback for private cloud environments.
http://www.vmware.com/products/it-business-management/vcenter-chargeback/overview.html
|
|
|
|
7 |
|
|
Network Extensions
Details
|
|
No
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
yes (IBM 5000v and Cisco Nexus 1000v) - (Fee-Based Add-On)
Cisco Nexus 1000v and IBM 5000v
IBM 5000v adds the following features: Manageability - Telnet, SSH, SNMP, TACACS+, RADIUS, Industry Standard CLI
Advanced networking features - L2-L4 ACLs, Static and Dynamic port aggregation, PVLAN, QoS, EVB (IEEE 802.1Qbg)
Network troubleshooting - SPAN, ERSPAN, sFlow, Syslog, VM network statistics
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009685
Software implementation of Cisco Nexus switch, details: http://www.vmware.com/products/cisco-nexus-1000V/overview.html
|
|