Addon
|
|
|
|
|
|
|
custom |
|
|
Unique Feature 1
|
Add-On not supported by this product
|
Add-On not supported by this product
|
Add-On not supported by this product
|
|
|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
|
|
- + - Citrix Hypervisor (formerly XenServer) is open-source based
- + - It is highly optimized for XenApp and XenDesktop workloads
- + - Leader in virtual delivery of 3D professional graphics
|
|
Cons
|
|
|
- - - Fewer features in the free edition. See https://tinyurl.com/yy2z5y6o
- - - Niche product
|
|
|
|
Content |
|
|
|
Content created by Virtualizationmatrix; (Contributors: Sean Cohen, Yaniv Dary, Raissa Tona, Larry W. Bailey)
|
|
The VIRTUALIST (Gica Livada) and Enzo Raso (Citrix)
Content created by Gica Livada from THE VIRTUALIST: http://www.thevirtualist.org/ with help from Enzo Rasso (Citrix)
|
|
|
|
Assessment |
|
|
|
RHV 3.5
NEW
Red Hat Virtualization is Red Hat's server and technical workstation virtualization platform. It consists of the management product RHV-M (manager) and RHV-H (hypervisor) - a purpose-built, slim hypervisor based on KVM (Kernel Virtual Machine) - also referred to as Red Hat Virtualization Hypervisor. Red Hat Enterprise Linux (RHEL) also contains KVM-based virtualization capabilities, typically referred to as Red Hat Enterprise Linux hosts. RHV-M can manage RHV-H hosts as well as RHEL hypervisor hosts running KVM (but make sure to check version compatibilities). All listed features apply to both RHV-H and RHEL hosts unless pointed out otherwise.
Red Hat Virtualization is Microsoft SVVP (Server Virtualization Validation Program) Certified and RHV-M can manage Windows guests.
Vendor Messaging:
- Red Hat Virtualization is a complete virtualization management solution for virtualized servers and technical workstations.
- Red Hat Virtualization aims to provide performance advantages, competitive pricing, and a trusted, stable environment.
- Building on Red Hat Virtualization hypervisor and the popular oVirt open virtualization management project, Red Hat Virtualization positions itself as strategic virtualization alternative to proprietary virtualization platforms.
- Red Hat Virtualization provides common underlying services and management technologies for traditional virtualization workloads while also providing an on-ramp to high-level cloud functionality based on OpenStack.
With Red Hat Virtualization, you can:
* Take advantage of existing people skills and investments
* Decrease TCO and accelerate ROI
* Automate time-consuming and complicated manual tasks
* Standardize storage, infrastructure, and networking services on OpenStack (tech-preview)
|
vSphere 6.0 Desktop
This edition covers the vSphere Desktop Edition.
vSphere Desktop Edition is a vSphere Edition designed for licensing vSphere in VDI deployments. vSphere Desktop provides all functionalities of vSphere Enterprise Plus Edition. It can only be used for VDI deployment and can be leveraged with both VMware Horizon View and other third-party VDI connection brokers. (The listed products/features do NOT claim interoperability with all desktop deployments - please check with VMware if in doubt)
vSphere is the collective term for VMware's virtualization platform; it includes the ESX hypervisor as well as the vCenter management suite and associated components. vSphere is considered by many the industry's most mature and feature-rich virtualization platform and had its origins in the initial ESX releases around 2001/2002.
vSphere is available in various editions and bundles:
vSphere Editions (un-bundled)
- Hypervisor (free)
- Standard
- Enterprise
- Enterprise Plus
vSphere with Operations Management (OM) - (vSphere + VMware vCenter Operations Management Suite Standard)
- vSphere with Operations Management Standard Edition
- vSphere with Operations Management Enterprise Edition
- vSphere with Operations Management Enterprise Plus Edition
vSphere with Operations Management Acceleration Kits (AK) - (Convenience bundles that include: six processor licenses for vSphere with Operations Management + vCenter Server Standard license. Six processor licenses of vSphere Data Protection Advanced with the Enterprise and Enterprise Plus Acceleration Kits only)
vSphere with Operations Management Acceleration Kits decompose into their individual kit components after purchase. This allows customers to upgrade and renew SnS for each individual component.
- vSphere with Operations Management Acceleration Kit Standard
- vSphere with Operations Management Acceleration Kit Enterprise
- vSphere with Operations Management Acceleration Kit Enterprise Plus
vSphere Essential Kits - (all-in-one solutions for small environments of up to three hosts with two CPUs each, including the vSphere processor licenses and vCenter Server for Essentials). Scalability limits for the Essentials Kits are product-enforced and cannot be extended other than by upgrading the whole kit to an Acceleration Kit. vSphere Essentials and Essentials Plus Kits are self-contained solutions and may not be decoupled or combined with other vSphere editions.
- Essentials
- Essentials Plus
vSphere Remote Office Branch Office Editions - licensing is priced in packs of 25 VMs (Virtual Machines). Editions can be used in conjunction with an existing or separately purchased vCenter Server edition. SnS is required for at least one year. The 25 VM pack can be distributed across multiple sites. A maximum of a single 25 VM pack can be used in a single remote location or branch office.
- VMware vSphere Remote Office Branch Office Standard
- VMware vSphere Remote Office Branch Office Advanced
VMware vSphere Desktop
vSphere Desktop Edition is a vSphere Edition designed for licensing vSphere in VDI deployments. It can only be used for VDI deployment and can be leveraged with both VMware Horizon View and other third-party VDI connection brokers (e.g. Citrix XenDesktop)
VMware's cloud offerings (vCloud Suite) and VDI (VMware Horizon) offerings are separate (fee-based) products.
|
Citrix Hypervisor 8 Standard Edition includes Citrix support. It extends alignment with XenApp and XenDesktop release cycles while the expanded XenServer Enterprise entitlement ensures you can leverage unique app and desktop performance integrations only available with XenServer. For Standard vs Enterprise edition, please check: https://tinyurl.com/yy2z5y6o
NEW
Citrix released Citrix Hypervisor 8 in April 2019. This release added new features:
- online LUN resize for GFS2 SR (Premium Edition)
- support for guest OS disks larger than two terabytes (Premium Edition)
- an increased number of supported guest operating systems
- support for disk and memory snapshots for vGPU-enabled VMs (Premium Edition)
- Web-based help for XenCenter and Citrix Hypervisor Conversion Manager
- Experimental Guest UEFI boot
Citrix entered the Hypervisor market with the acquisition of XenSource - the main supporter of the open source Xen project - in Oct 2007. The Xen project continues to exist, see https://www.xenproject.org/
|
|
|
RHV 3.5 released in Feb 2015 - (RHV 3.4 - June 2014, RHV 3.3 - January 2014, RHV 3.2 - June 2013, RHV 3.1 - Dec 2012, RHV 3.0 - Jan 2012, RHV 2.2 - Aug 2010, RHV 2.1 - Nov 2009)
NEW
Qumranet, a technology startup, began the development of KVM and was subsequently acquired by Red Hat in 2008. Red Hat's first release was v2.1 in 2009, with an update in 2010 which included the virtual technical workstation capabilities. The 3.0 release in 2012 was a major milestone, porting the RHV-M manager from .NET to Java (and fully open-sourcing it) alongside other improvements.
RHV 3.1 removed all requirements for any Windows-based infrastructure, but still supports Microsoft Active Directory for user authentication, administration, VDI, etc. RHV 3.1 also provided a RESTful API for integration.
Since RHV 3.2, Red Hat has provided many feature enhancements, improvements in scale, enhanced reliability and integration points (hooks, OpenStack etc) documented in the matrix.
While not as large as e.g. the VMware and Microsoft ecosystems, there are a large number of ISVs with products currently integrated into RHV, including Acronis for backup/DR/migration, Symantec Storage Foundation and Tintri for storage, and BMC TrueSight for capacity optimization; Igel, Wyse and other thin client vendors have SPICE (RHV's remote viewing protocol) integrated in some of their product offerings. Major OEM hardware vendors like IBM and HP support RHV (e.g. IBM has part numbers for all of RHV's subscription offerings). Additionally, RHV enables third-party plug-ins. Recent plug-ins include:
- BMC connector for RHV-M REST API to collect data for managing RHV boxes without having to install an agent.
- HP OneView for Red Hat Virtualization (OVRHV) UI plug-in that allows you to seamlessly manage your HP ProLiant infrastructure from within RHV Manager and provides actionable, valuable insight on the underlying HP hardware (an HP Insight Control plug-in is also available).
- NetApp Virtual Console enables rapid cloning of NetApp virtual machines, and allows NFS storage discovery, provisioning and modification from within Red Hat Virtualization Manager.
- Symantec Storage Foundation, which delivers storage Quality of Service (QoS) at the application level and maximizes your storage efficiency, availability and performance across operating systems. This includes Symantec Cluster Server, which provides automated disaster recovery functionality to keep applications up and running. Cluster Server enables application-specific fail-over and significantly reduces recovery time by eliminating the need to restart applications in case of a failure.
- Tenable Network Security's Nessus Audit for RHV-M, which queries the RHV API and reports that information within a Nessus report.
- Ansible RHV module that allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the RHV platform.
Visit marketplace.redhat.com/RHV to learn more about these plug-ins and RHV's ecosystem of partners.
|
|
Citrix Hypervisor 8 Release date April 2019
Xen's first public release was in 2003, becoming part of Novell SUSE 10 OS in 2005 (later also Red Hat). In Oct 2007 Citrix acquired XenSource (the main maintainer of the Xen code) and released XenServer under the Citrix brand.
For more info check https://tinyurl.com/or2f389
|
|
|
|
Pricing |
|
|
|
Yes (included in RHV hypervisor subscription)
The RHV-M management component is included in the RHV subscription model (i.e. a single part number for both hypervisor and management)
|
Desktop:
$65 / powered-on desktop VM
vSphere Desktop Edition is a vSphere Edition designed for licensing vSphere in VDI deployments. It can only be used for VDI deployment and can be leveraged with both VMware Horizon View and other third-party VDI connection brokers (e.g. Citrix XenDesktop)
It is only available in a pack size of 100 desktop VMs at a license list price of $6,500.00 ($65/powered on vm)
vSphere Desktop is licensed based on the total number of Powered On Desktop Virtual Machines.
vSphere Desktop Edition FAQ here: http://www.vmware.com/products/vsphere/vsphere-desktop.html
Licensing for the Horizon View family here: http://www.vmware.com/files/pdf/view/VMware-View-Pricing-Licensing-and-Upgrading-white-paper.pdf
|
Free or two commercial editions; pricing for Standard Edition is $763/socket, Premium $1,525/socket - both including 1 year of software maintenance.
The Express Edition provides a reduced set of features and is not eligible for Citrix Support and Maintenance. Express Edition does not require a license. Hosts that are running the Express Edition of Citrix Hypervisor are labeled as “Unlicensed” in XenCenter.
Express Edition was called Free Edition in previous releases of Citrix Hypervisor. The name of the edition has changed to align with other Citrix products, but the features and capabilities provided by the edition have not changed.
For commercial editions (Standard or Enterprise) XenServer is licensed on a per-CPU socket basis. For a pool to be considered licensed, all XenServer hosts in the pool must be licensed. XenServer only counts populated CPU sockets.
All customers who have purchased any edition of XenApp or XenDesktop have an entitlement to use all Citrix Hypervisor features.
From version 6.2.0 onwards, XenServer (other than via the XenDesktop licenses) is licensed on a per-socket basis. Allocation of licenses is managed centrally and enforced by a standalone Citrix License Server, physical or virtual, in the environment. After applying a per-socket license, XenServer will display as Citrix XenServer Per-Socket Edition.
|
|
|
Yes, combined RHEL and RHV offering 26% savings
Red Hat offers Red Hat Enterprise Linux with Smart Virtualization (a combined solution of Red Hat Enterprise Linux and Red Hat Virtualization) that offers a 26% savings over buying each product separately. Red Hat Enterprise Linux with Smart Virtualization is the ideal platform for virtualized Linux workloads, enabling organizations to virtualize mission-critical applications while delivering unparalleled performance, scalability, and security features. See details here: http://red.ht/KBgLO5
|
|
Free (XenCenter)
Citrix XenCenter is the Windows-native graphical user interface for managing Citrix Hypervisor. It is included for no additional cost (open source as well as commercial versions).
|
|
Bundle/Kit Pricing
Details
|
RHV: No (RHEL: 1/4/unlimited)
RHV subscriptions include the RHV-H hypervisor and RHV Manager; they do not include the rights to use RHEL as the guest operating system in the virtual machines being managed by RHV.
The customer would purchase this separately by buying a RHEL for Virtual Datacenter subscription.
Please note that RHEL hosts generate additional subscription costs that are not included with RHV (see https://www.redhat.com/apps/store/server/ for details). RHEL hosts are priced by sockets (2, 4), number of virtual guests included (1, 4, unlimited) and subscription levels (Standard/Premium).
Or, the customer can buy Red Hat Enterprise Linux with Smart Virtualization, which includes both RHV and RHEL for use as the guest operating system. http://red.ht/1lT1fww
|
Horizon View Bundle, Horizon View Add-on to Bundle Upgrade
vSphere Desktop can be purchased stand-alone or via the Horizon View Bundle ($250 per concurrent connection) or the Horizon View Add-on to Bundle Upgrade ($70 per concurrent connection).
Details here: http://www.vmware.com/files/pdf/view/VMware-View-Pricing-Licensing-and-Upgrading-white-paper.pdf
|
Yes
Citrix Hypervisor is included for free in the media kits of all XenApp and XenDesktop editions
|
|
Guest OS Licensing
Details
|
Yes (RHV-M)
RHV-M - the Red Hat Virtualization Manager with a web-based UI - is the central management console.
RHV 3.5 is based entirely on open source software, with no dependencies on proprietary server infrastructures or web browsers. It is a centralized management system with a search-driven graphical interface supporting up to hundreds of hosts and thousands of virtual machines. The fully featured enterprise management system enables customers to centrally manage their entire virtual environments, which include virtual datacenters, clusters, hosts, guest virtual servers and technical workstations, networking, and storage.
RHV-M is also localized in various languages including: French, German, Japanese, Simplified Chinese, Spanish and English.
From RHV 3.3 there is full support for integrated external applications plug-ins from HP, Symantec, and NetApp – Learn more through the RHV marketplace - http://marketplace.redhat.com/RHV
|
|
No
A demo Linux VM is included; there are no guest OS licenses included with the XenServer license. Guest OSes therefore need to be licensed separately.
|
|
|
VM Mobility and HA
|
|
|
|
|
|
|
VM Mobility |
|
|
Live Migration of VMs
Details
|
Yes (except with CPU pass-through)
Each cluster is configured with a minimal CPU type that all hosts in that cluster must support (you specify the CPU type in the RHV-M GUI when creating the cluster). Guests running on hosts within the cluster all run on this CPU type, ensuring that every guest can be live migrated to any host within the cluster. This cannot be changed after creation without significant disruption. All hosts in a cluster must run the same CPU type (Intel or AMD).
|
|
Yes XenMotion
XenMotion is available in all versions of XenServer and allows you to move a running VM from one host to another host when the VM's disks are located on storage shared by both hosts. This allows for pool maintenance features such as Workload Balancing (WLB), High Availability (HA), and Rolling Pool Upgrade (RPU) to automatically move VMs. These features allow for workload leveling, infrastructure resilience, and the upgrade of server software, without any VM downtime. Storage can only be shared between hosts in the same pool; as a result, VMs can only be moved within the same pool.
|
|
Migration Compatibility
Details
|
Yes
|
|
Yes (Heterogeneous Pools)
XenServer 5.6 introduced Heterogeneous Pools which enables live migration across different CPU types of the same vendor (requires AMD Extended Migration or Intel Flex Migration), details here: https://tinyurl.com/y44b3nxc.
This capability is maintained in Citrix Hypervisor.
|
|
|
Yes (LB) - Built-in (CPU) and Scheduler for custom
Yes - a policy engine determines the specific host on which a virtual machine runs. The policy engine decides which server will host the next virtual machine based on whether load balancing criteria have been defined, and which policy is being used for that cluster. RHV-M will use live migration to move virtual machines around the cluster as required.
From RHV-M 3.3 a new scheduler handles virtual machine placement, allowing users to create new scheduling policies, and also write their own logic in Python and include it in a policy.
- The new scheduler serves scheduling requests for running or migrating virtual machines according to a policy.
- The scheduling policy also includes load balancing functionality.
- Scheduling is performed by applying hard constraints and soft constraints to get the optimal host for that request at a given point in time.
- The infrastructure allowing users to extend the new scheduler is based on a service called ovirt-scheduler-proxy. The service's purpose is to let RHV admins extend the scheduling process with custom Python filters, weight functions and load balancing modules.
- Every cluster has a scheduling policy. Prior to 3.3, there were only three main policies - None, Even distribution, and Power saving - and now administrators can create their own policies or use the built-in policies which were extended to support new capabilities such as shutting down servers for power saving policy.
The load balancing process runs once every minute for each cluster in a data center. You can disable automatic migration for individual VMs or pin them to specific hosts.
You can choose to set the policy as either even distribution or power saving, but NOT both.
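The hard/soft constraint model described above can be sketched in a few lines. This is not the actual ovirt-scheduler-proxy plug-in interface - the dict-based host and VM shapes and the function signatures are assumptions for illustration - but it shows the idea: filters (hard constraints) discard hosts that cannot run the VM, and a weight function (soft constraint) ranks what remains.

```python
# Minimal sketch of scheduler filtering and weighting, loosely modelled on
# the ovirt-scheduler-proxy concept. Host/VM dicts are illustrative only.

def memory_filter(hosts, vm):
    """Hard constraint: keep only hosts with enough free memory for the VM."""
    return [h for h in hosts if h["free_mem_mb"] >= vm["mem_mb"]]

def even_distribution_weight(hosts):
    """Soft constraint: prefer the least loaded host (lowest CPU load first)."""
    return sorted(hosts, key=lambda h: h["cpu_load"])

hosts = [
    {"name": "host1", "free_mem_mb": 4096, "cpu_load": 0.70},
    {"name": "host2", "free_mem_mb": 16384, "cpu_load": 0.20},
    {"name": "host3", "free_mem_mb": 1024, "cpu_load": 0.05},
]
vm = {"name": "webserver", "mem_mb": 2048}

candidates = memory_filter(hosts, vm)            # host3 is filtered out
best = even_distribution_weight(candidates)[0]   # least loaded remaining host
print(best["name"])  # host2
```

A real policy would chain several such filters (CPU type, pinning, HA reservations) before weighting, which matches the "hard constraints then soft constraints" ordering described in the bullets above.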
|
|
Yes
Enabled through XenCenter
|
|
Automated Live Migration
Details
|
Yes (Power Saving)
When Power Saving is enabled in a cluster, it distributes the load in a way that consolidates virtual machines on a subset of available hosts. This enables surplus hosts that are not in use to be powered down, saving power. You can set the thresholds in the RHV-M GUI to specify the minimum service level a host is permitted to have.
You must also specify the time interval in minutes that a host is permitted to run below the minimum service level before remaining virtual machines are migrated to other hosts in the cluster - as long as the maximum service level set also permits this.
LIMITATIONS:
- You can enable either Load Balancing or Power Savings on a cluster but not both concurrently.
- Currently only CPU utilization is considered.
|
|
No, Workload Balancing
Workload Balancing (WLB), which was reintroduced in 6.5, is enhanced in XenServer 7.x. Available in Enterprise edition.
|
|
|
Yes
NEW
Storage Live Migration is officially supported since RHV 3.2, which allowed migration of virtual machine disks to different storage devices while the virtual machine is running.
RHV 3.3 added the ability to do this for several disks of the same VM concurrently.
RHV 3.4 added the ability to do cold move between any two storage domains in the same DC (e.g., NFS and iSCSI) and Storage Live Migration between types of file domains (NFS/POSIX/Gluster) or block domains (FCP/iSCSI).
RHV 3.5 added the option to move an entire storage domain between DCs or even between setups.
|
|
No Workload Balancing
Power management is part of XenServer Workload Balancing (WLB) available in Enterprise edition.
Background: Power Management, introduced with 5.6, was able to automatically consolidate workloads on the lowest possible number of physical servers and power off unused hosts when their capacity was not needed. Hosts would be automatically powered back on when needed.
|
|
Storage Migration
Details
|
200 hosts/cluster
That is the supported maximum number of hosts per RHV DataCenter and also per cluster (the theoretical KVM limit is higher).
|
|
Yes (Storage Live Migration)
Storage Live Migration in Citrix Hypervisor works with the VM in any power state (stopped, paused or running).
Live migration and storage live migration are subject to the following limitations and caveats:
- VMs using PCI pass-through cannot be migrated.
- VM performance is reduced during migration.
- For storage live migration in pools protected by high availability, disable high availability before attempting VM migration.
- Time to completion of VM migration depends on the memory footprint of the VM and its activity. In addition, the size of the VDI and its storage activity affects VMs being migrated with storage live migration.
- IPv6 Linux VMs require a Linux Kernel greater than 3.0.
More info: https://tinyurl.com/yyq4txuv
|
|
|
|
HA/DR |
|
|
|
Yes (HA)
High availability is an integrated feature of RHV and allows for virtual machines to be restarted in case of a host failure.
HA has to be enabled at the virtual machine level. You can specify levels of priority for the VM (e.g. if resources are constrained, only high-priority VMs are restarted). Hosts that run highly available VMs have to be configured for power management (to ensure accurate fencing in case of host failure).
Fencing Details: When a host becomes non-responsive it potentially retains the lock on the virtual disk images for virtual machines it is running. Attempting to start a virtual machine on a second host could cause data corruption. Fencing allows RHV-M to safely release the lock (using a fence agent that communicates with the power management card of the host) to confirm that a problem host has truly been rebooted.
RHV-M gives a non-responsive host a grace period of 30 seconds before any action is taken in order to allow the host to recover from any temporary errors.
Note: The RHV-M manager needs to be running for HA to function (unlike e.g. VMware HA or Hyper-V HA, which do not rely on vCenter / VMM for the failover capability); also, HA cannot be enabled at the cluster level.
|
|
64 hosts / resource pool
NEW
64 Hosts per Resource Pool.
Please see Citrix Hypervisor Configuration Limits document https://tinyurl.com/y5o67fdj
Note: the maximum pool size for Express edition is now restricted to 3 hosts.
|
|
Integrated HA (Restart vm)
Details
|
Yes (HA, WatchDog)
RHV supports a watchdog device for Linux guests that restarts virtual machines in case of OS failure. High availability (in addition to monitoring physical hosts) also monitors all virtual machines, so if the virtual machine's operating system crashes, a signal is sent to automatically restart the virtual machine - though possibly on a different host.
|
|
Yes
Citrix Hypervisor High Availability protects the resource pool against host failures by restarting virtual machines. HA allows for configuration of restart priority and failover capacity. Configuration of HA is simple (effort similar to enabling VMware HA).
|
|
Automatic VM Reset
Details
|
No
There is no live lock-step mirroring support in RHV (aka Fault Tolerance in VMware), although the theoretical capability is available in KVM. Red Hat tends to point out that the limitations of this technology (inability to e.g. take snapshots or perform a live storage migration, limited guest vCPU support, high bandwidth/processing requirements) can make it inappropriate for enterprise implementation.
|
|
No
There is no automated restart/reset of individual virtual machines, e.g. to protect against OS failure.
|
|
VM Lockstep Protection
Details
|
No (native);
Yes (with Vendor Add-On: Satellite)
There is no integrated application-level monitoring or restart of services/VMs in case of application failures.
You can e.g. use Symantec Cluster Server to mitigate this.
This is possible using Satellite 6.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
|
No
While Citrix Hypervisor can perform VM restart in case of a host failure, there is no integrated mechanism to provide zero-downtime failover functionality.
|
|
Application/Service HA
Details
|
No (yes with Symantec Storage Foundation - Fee-Based Add-On)
There is no natively provided Site Failover capability in RHV.
You can e.g. use Symantec Storage Foundation to mitigate this.
Since RHV 3.2 Red Hat supports a third-party plug-in framework, enabling third parties to integrate new features and actions directly into the RHV Manager user interface.
RHV-M 3.5 fully supports a Symantec Storage Foundation plug-in that provides disaster recovery in case of a failure.
Symantec Storage Foundation offers push button disaster recovery orchestration and features completely automated failover of a Red Hat Virtualization environment over to a disaster recovery site. This includes guests, guest network reconfiguration, and storage reconfiguration. It is designed to provide high availability and disaster recovery for databases, custom applications, and complete multi-tiered applications across physical and virtual environments over any distance.
|
|
No
There is no application monitoring/restart capability provided with Citrix Hypervisor.
|
|
Replication / Site Failover
Details
|
Yes
With RHV-H you can upgrade and reinstall a Red Hat Virtualization Hypervisor host from an ISO image centrally stored on the Red Hat Virtualization Manager. Upgrading and reinstalling means that you are stopping and restarting the host. This can be done either manually, by entering maintenance mode for the individual hosts, or using an automated process that schedules and applies updates to the cluster by setting hosts to maintenance mode in round-robin fashion (like e.g. VMware Update Manager).
On RHEL hypervisor hosts, patching is done via the normal patching processes via RHN or Satellite. Reboots can either be done manually or scheduled in round-robin fashion.
|
|
Integrated Disaster Recovery
NEW
Citrix Hypervisor DR works by storing all the information needed to recover your business-critical VMs and vApps on storage repositories (SRs) that are then replicated from your primary (production) environment to a backup environment. When a protected pool at your primary site goes down, the VMs and vApps in that pool can be recovered from the replicated storage and recreated on a secondary (DR) site, with minimal application or user downtime.
Note: Citrix Hypervisor DR can only be used with LVM over HBA or LVM over iSCSI storage types.
Citrix Hypervisor VMs consist of two components:
- Virtual disks that are being used by the VM, stored on configured storage repositories (SRs) in the pool where the VMs are located.
- Metadata describing the VM environment. This is all the information needed to recreate the VM if the original VM is unavailable or corrupted. Most metadata configuration data is written when the VM is created and is updated only when you make changes to the VM configuration. For VMs in a pool, a copy of this metadata is stored on every server in the pool.
In a DR environment, VMs are recreated on a secondary (DR) site from the pool metadata - configuration information about all the VMs and vApps in the pool. The metadata for each VM includes its name, description and Universal Unique Identifier (UUID), and its memory, virtual CPU and networking and storage configuration. It also includes the VM’s startup options - start order, delay interval and HA restart priority - which are used when restarting the VM in an HA or DR environment. More info: https://tinyurl.com/y5skhhu4
|
|
|
Management
|
|
|
|
|
|
|
General |
|
|
Central Management
Details
|
Third-party plug-in framework
NEW
RHV focuses on managing the virtual infrastructure and can also manage RHSS.
Also, RHV-M 3.3 (and onwards) integrates with 3rd party applications including:
- BMC connector for RHV-M REST API to collect data for managing RHV boxes without having to install an agent.
- HP OneView for Red Hat Virtualization (OVRHV) UI plug-in that allows you to seamlessly manage your HP ProLiant infrastructure from within RHV Manager and provides actionable, valuable insight on the underlying HP hardware (an HP Insight Control plug-in is also available).
- NetApp Virtual Console enables rapid cloning of NetApp virtual machines, and allows NFS storage discovery, provisioning and modification from within Red Hat Virtualization Manager.
- Symantec Storage Foundation, which delivers storage Quality of Service (QoS) at the application level and maximizes your storage efficiency, availability and performance across operating systems. This includes Symantec Cluster Server, which provides automated disaster recovery functionality to keep applications up and running. Cluster Server enables application-specific fail-over and significantly reduces recovery time by eliminating the need to restart applications in case of a failure.
- Tenable Network Security's Nessus Audit for RHV-M, which queries the RHV API and reports that information within a Nessus report.
- Ansible RHV module that allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the RHV platform.
|
|
Yes (XenCenter)
XenCenter is the central Windows-based multi-server management application (client) for XenServer hosts (including the open source version).
It differs from other management apps (e.g. SCVMM or vCenter), which typically have a management server/management client architecture where the management server holds centralized configuration information (typically in a 3rd-party DB). Unlike these management consoles, XenCenter distributes management data across XenServer servers (hosts) in a resource pool to ensure there is no single point of management failure. If a management server (host) should fail, any other server in the pool can take over the management role. XenCenter is essentially only the client.
|
|
Virtual and Physical
Details
|
Yes (AD and IPA)
RHV offers the choice to either integrate with Microsoft Active Directory or use Red Hat IPA (based upon open technologies and standards) with support for LDAP and Kerberos, centrally managed identity, single sign-on services, high availability directory services or Red Hat Directory Services (RHDS).
RHV provides a range of pre-configured or default roles, from the Superuser or system administration of the platform, to an end user with permissions to access a single virtual machine only. Additional roles can be added and customized to suit the end user environment.
|
=AU21
|
Limited (plug-ins)
XenCenter focuses on managing the virtual infrastructure (XenServer hosts and the associated virtual machines).
XenServer 7.0 saw the introduction of a System Center Operations Manager (SCOM) integration pack (via a Comtrade acquisition). This enables the management and monitoring of XenServer hosts and VMs, and is available at no additional charge for XenServer 7.x Enterprise Edition users (see https://tinyurl.com/y5f985vg for details).
|
|
RBAC / AD-Integration
Details
|
No (native)
Yes (with Vendor Add-On: CloudForms)
No - RHV exclusively manages Red Hat based environments.
Yes with Red Hat CloudForms - users can manage multiple hypervisor vendors. Details here: http://red.ht/I8JG3E (additional cost, not included in RHV subscription)
|
=AU22
|
Yes (hosts/XenCenter)
XenServer 5.6 introduced Role Based Access Control by allowing the mapping of a user (or a group of users) to defined roles (a named set of permissions), which in turn have the ability to perform certain operations. RBAC depends on Active Directory for authentication services. Specifically, XenServer keeps a list of authorized users based on Active Directory user and group accounts. As a result, you must join the pool to the domain and add Active Directory accounts before you can assign roles.
There are 6 default roles: Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator and Read Only - which can be listed and modified using the xe CLI.
Details here: https://tinyurl.com/y5wrct5b
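The role-to-permission mapping described above can be sketched in a few lines. The role names follow XenServer's defaults, but the permission strings and AD subjects below are illustrative stand-ins, not the identifiers actually used by XenServer or the xe CLI:

```python
# Hypothetical sketch of XenServer-style RBAC: a role is a named set of
# permissions, and AD users/groups are mapped to roles. Permission names
# and AD subjects are illustrative only.
ROLES = {
    "pool-admin":  {"host.reboot", "vm.start", "vm.stop", "vm.create", "pool.edit"},
    "vm-operator": {"vm.start", "vm.stop"},
    "read-only":   set(),
}

# AD subject (user or group) -> assigned role
SUBJECT_ROLES = {
    "CORP\\ops-team": "vm-operator",
    "CORP\\alice":    "pool-admin",
}

def is_authorized(subject: str, operation: str) -> bool:
    """Return True if the subject's role grants the requested operation."""
    role = SUBJECT_ROLES.get(subject)
    if role is None:
        return False  # not on the authorized-subjects list at all
    return operation in ROLES[role]

print(is_authorized("CORP\\ops-team", "vm.start"))   # True
print(is_authorized("CORP\\ops-team", "pool.edit"))  # False
```

The key point mirrored here is that authorization fails closed: a subject not present in the Active Directory-derived list is denied regardless of the operation.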
|
|
Cross-Vendor Mgmt
Details
|
Yes (RHV-M, Power User Portal)
Yes, RHV-M is Java based and is accessed through a web browser GUI, RESTful API with session support, Linux CLI, Python SDK, Java SDK.
RHV also offers a Power User Portal, a web-based access portal for users. Red Hat positions it as an entry-level Infrastructure as a Service (IaaS) user portal that lets users: create, edit and remove virtual machines; manage virtual disks and network interfaces; assign user permissions to virtual machines; create and use templates to rapidly deploy virtual machines; monitor resource usage and high-severity events; and create and use snapshots to restore virtual machines to a previous state.
|
=AU23
|
No (native)
Yes (Vendor Add-On: CloudPlatform)
XenCenter only manages Citrix XenServer hosts.
Comments:
- Citrix's desktop virtualization product (XenDesktop, fee-based add-on) supports multiple hypervisors (ESX, XenServer, Hyper-V)
- The Citrix XenServer Conversion Manager, introduced in 6.1, enables batch import of VMs created with VMware products into a XenServer pool, reducing the cost of converting to a XenServer environment.
|
|
Browser Based Mgmt
Details
|
Yes (extended functionality with CloudForms - Fee-Based Add On)
RHV has a comprehensive data warehouse with a stable API and a reports package that provides a suite of pre-configured reports, enabling you to monitor and analyse the system at data center, cluster and host levels.
It also provides dashboards in the UI to monitor the system at these different levels.
CloudForms offers advanced cloud and virtualization operations management capabilities.
Note that CloudForms is an additional fee-based offering not covered by the RHV subscription!
Its capabilities include:
- delivering IaaS with self-service
- service catalogs, automated provisioning and life cycle management
- monitoring and optimization of infrastructure resources and workloads
- metering, resource quotas, and chargeback
- proactive management, advanced decision support, and intelligent automation through predictive analytics
- visibility and reporting for governance, compliance, and management insight
- real-time enforcement of enterprise policies, ensuring cloud security, reliability, and availability
- IT process, task, and event automation
Details here: www.redhat.com/en/technologies/cloud-computing/cloudforms
|
=AU24
|
Not directly with XenCenter but possible with XenOrchestra (Open Source)
More info: https://bit.ly/1gcmTYN
Web Self Service (retired with the launch of XenServer 6.2) was a lightweight portal which allowed individuals to operate their own virtual machines without having administrator credentials to the XenServer host. For large infrastructures, OpenStack is a full orchestration product with far greater capability; for a lightweight alternative, xvpsource.org offers a free open source product.
Related Citrix products have browser based access, for example Storefront (next generation of Web Interface)
XenOrchestra is a web interface to visualize and administer your XenServer (or XAPI-enabled) hosts. No agent is required to make it work. It aims to be easy to use on any device supporting modern web technologies (HTML 5, CSS 3, JavaScript), such as your desktop computer or your smartphone. More info: https://tinyurl.com/ya7d72q6
|
|
Adv. Operation Management
Details
|
Yes (Live Migration - unlimited concurrent migrations, 3 by default)
Yes, live migration is fully supported with unlimited concurrent migrations (depending only on available resources on other hosts and network speed). By default it is limited to 3 concurrent outgoing migrations, and each live migration event is limited to a maximum transfer speed of 32 MiB/s.
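A minimal sketch of the default concurrency policy described above (illustrative only, not RHV code): a semaphore caps outgoing migrations at 3, and further requests wait for a free slot.

```python
# Illustrative sketch of the default RHV policy: at most 3 concurrent
# outgoing live migrations per host; extra requests wait for a free slot.
import threading
import time

MAX_CONCURRENT_OUTGOING = 3                       # RHV's default limit
slots = threading.BoundedSemaphore(MAX_CONCURRENT_OUTGOING)
lock, active, stats = threading.Lock(), [], {"peak": 0}

def live_migrate(vm: str) -> None:
    with slots:                                   # blocks while 3 are in flight
        with lock:
            active.append(vm)
            stats["peak"] = max(stats["peak"], len(active))
        time.sleep(0.05)                          # stand-in for the memory copy
        with lock:                                # (capped at ~32 MiB/s in RHV)
            active.remove(vm)

threads = [threading.Thread(target=live_migrate, args=(f"vm{i:02d}",)) for i in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print("peak concurrent migrations:", stats["peak"])  # never exceeds 3
```

Six requested migrations are serviced, but the semaphore guarantees no more than three run at once, which is the behavior the default limit enforces.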
|
=AU25
|
Not included but available via Citrix Director
There is no advanced operations management tool included with XenServer.
Additional Info:
XenServer's Integration Suite Supplemental Pack allows interoperation with System Center Operations Manager (SCOM). SCOM enables monitoring of host performance when installed on a XenServer host.
Both of these tools can be integrated with your XenServer pool by installing the Integration Suite Supplemental Pack on each of your XenServer hosts.
For virtual desktop environments, Citrix EdgeSight was a performance and availability management solution for XenDesktop, XenApp and endpoint systems. EdgeSight monitored applications, devices, sessions, license usage, and the network in real time, allowing users to quickly analyze, resolve, and proactively prevent problems.
EdgeSight has since been discontinued and replaced by Citrix Director: https://tinyurl.com/y3axdt9p
|
|
|
|
Updates and Backup |
|
|
Hypervisor Upgrades
Details
|
Yes (Red Hat Network)
Updates to the virtual machines are typically performed as in the physical environment, providing only limited capabilities. For Red Hat virtual machines updates can be downloaded from the Red Hat Network. For Windows virtual machines you would apply the relevant MS update mechanisms. There is no specific integrated function in RHV-M to update virtual machines or templates.
A centralized patching mechanism for Red Hat machines is possible via Satellite.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
=AU38
|
The XenCenter released with XenServer 7.0 allows updates to be applied to all versions of XenServer (commercial and free)
With XenServer 7.0, patching and updating via the XenCenter management console (enabling automated, GUI-driven patch application and upgrades) is supported with both the commercial and free XenServer versions.
XenServer 6 introduced the Rolling Pool Upgrade Wizard. The Wizard simplifies upgrades to XenServer 6.x by performing pre-checks with a step-by-step process that blocks unsupported upgrades. You can choose between automated or semi-automated, where automated can use a network based XenServer update repository (which you have to create) while semi-automated requires local install media. There are still manual steps required for both approaches and no scheduling functionality or online repository is integrated.
Introduced in XenServer 7.1, live patching enables customers to install some Linux kernel and Xen hypervisor hotfixes without having to reboot the hosts. Such hotfixes consist of both a live patch, which is applied to the memory of the host, and a hotfix that updates the files on disk. This reduces maintenance costs and downtime. When applying an update using XenCenter, the Install Update wizard checks whether the hosts need to be rebooted after the update is applied and displays the result on the Prechecks page. This enables customers to know the post-update tasks well in advance and schedule the application of hotfixes accordingly.
XenServer Live Patching is available for XenServer Enterprise Edition customers, or those who have access to XenServer through their XenApp/XenDesktop entitlement.
|
|
|
Yes; Including RAM
RHV 3.1 introduced the ability to take live snapshots of VMs, along with an enhanced snapshot management interface (maintained with 3.3).
New in RHV 3.3: Snapshots now include the state of a virtual machine's memory as well as its disks. In addition, a new Create Snapshot button has been added to the action panel of the Virtual Machines tab, and as a context menu item when a virtual machine is selected.
New in RHV 3.4: The admin portal now allows you to preview, commit and manage snapshots of individual disks within a VM.
New in RHV 3.5: Ability to live-merge snapshots, allowing removal of intermediate snapshots while the VM is running.
Taking a snapshot in RHV involves the following possible actions:
- Creation
- Preview: previewing a snapshot to determine whether or not to restore the system to it. When a snapshot is previewed, a new copy-on-write (COW) preview layer is copied from the snapshot being previewed, and the guest interacts with the preview instead of the actual snapshot volume. The preview can then be committed to restore the guest data to the state captured in the snapshot, or the admin can select Undo to discard the preview layer.
- Deletion: deleting a restoration point that is no longer required
Notes for Live Snapshots:
- A VM with a guest agent that supports quiescing can ensure filesystem consistency across live snapshots. Red Hat Network-registered Red Hat Enterprise Linux guests can install the qemu-guest-agent to enable quiescing before snapshots (VDSM uses libvirt to communicate with the agent to prepare for a snapshot).
- All live snapshots are attempted with quiescing enabled. If the snapshot command fails because there is no compatible guest agent present, the live snapshot is re-initiated without the use-quiescing flag.
Please note: when a virtual machine is reverted with quiesced filesystems, it boots cleanly with no filesystem check required. Reverting a snapshot taken with un-quiesced filesystems, however, requires a filesystem check on boot.
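The preview/commit/undo cycle can be modelled as a stack of copy-on-write layers. This is a conceptual sketch only, not RHV's actual image format or internals:

```python
# Conceptual model of COW snapshot layering: reads fall through from the
# newest layer to the base; writes land in the preview layer while one is
# active; commit keeps the preview, undo discards it.
class Disk:
    def __init__(self, base: dict):
        self.layers = [dict(base)]       # bottom-up chain of COW layers
        self.preview = None

    def read(self, key):
        chain = self.layers + ([self.preview] if self.preview is not None else [])
        for layer in reversed(chain):    # newest layer wins
            if key in layer:
                return layer[key]
        return None

    def write(self, key, value):
        target = self.preview if self.preview is not None else self.layers[-1]
        target[key] = value

    def start_preview(self):
        self.preview = {}                # fresh empty COW layer

    def commit(self):
        self.layers.append(self.preview)
        self.preview = None

    def undo(self):
        self.preview = None              # discard all writes made during preview

disk = Disk({"/etc/motd": "v1"})
disk.start_preview()
disk.write("/etc/motd", "v2")
print(disk.read("/etc/motd"))  # v2 (seen through the preview layer)
disk.undo()
print(disk.read("/etc/motd"))  # v1 (preview discarded, base state restored)
```

The guest only ever interacts with the topmost layer, which is why discarding a preview is cheap: nothing below it was ever modified.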
|
=AU39
|
No
There is no integrated update engine for the guest OS of virtual machines within the Standard Edition of Citrix Hypervisor.
|
|
|
Backup API
From RHV 3.3 there is an API set for third-party tools that offer backup, restore, and replication.
For backup, a snapshot of a virtual machine's disk is created and then attached to a virtual appliance. For restore, disks are attached to a virtual appliance, the data is restored to the disks, and then the disks are attached to a virtual machine.
Tools are from enterprise companies such as Tintri, Acronis, Symantec, etc. From RHV 3.3 there is configuration support for add/edit/delete storage connections to enable multi-pathing, hardware changes, simpler failover to remote sites, and array-based replication.
RHV-M 3.5 fully supports Acronis Backup & Recovery that provides image-based backup, disaster recovery (recover the whole VM) and data protection (recover selected files).
This is a Fee-based Add-On; Details - http://marketplace.redhat.com/RHV/10085-Acronis-Backup-Recovery-11-5-Virtual-Edition-for-RHV-11-5
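The backup flow above (snapshot the disk, attach it to a backup appliance, read the data, clean up) can be sketched as follows. All class and method names here are hypothetical stand-ins, not the actual RHV backup API:

```python
# Hedged sketch of the snapshot-and-attach backup flow; the VM/Appliance
# classes below are toy models, not RHV objects.
class VM:
    def __init__(self, name, disk):
        self.name, self.disk, self.snapshots = name, disk, []
    def create_snapshot(self):
        snap = dict(self.disk)            # point-in-time copy of the disk
        self.snapshots.append(snap)
        return snap
    def delete_snapshot(self, snap):
        self.snapshots.remove(snap)

class Appliance:
    def __init__(self): self.attached = []
    def attach(self, snap): self.attached.append(snap)
    def detach(self, snap): self.attached.remove(snap)
    def read(self, snap): return dict(snap)

def backup_vm(vm, appliance, store):
    snap = vm.create_snapshot()           # 1. snapshot the VM's disk
    appliance.attach(snap)                # 2. expose it to the backup appliance
    store[vm.name] = appliance.read(snap) # 3. stream the data to backup storage
    appliance.detach(snap)                # 4. detach and
    vm.delete_snapshot(snap)              #    delete the temporary snapshot

backups = {}
vm = VM("db01", {"block0": b"data"})
backup_vm(vm, Appliance(), backups)
print(backups["db01"])   # copy of the snapshot contents
print(vm.snapshots)      # [] - temporary snapshot cleaned up after backup
```

The restore direction described in the text is the mirror image: attach empty disks to the appliance, write the stored data back, then attach the disks to a VM.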
|
=AU40
|
Yes
You can take regular snapshots, application-consistent point-in-time snapshots (requires Microsoft VSS) and snapshots with memory of a running or suspended VM. All snapshot activities (take/delete/revert) can be done while the VM is online.
|
|
Backup Integration API
Details
|
No
There is no integrated backup in RHV Manager.
Use traditional backup methods to protect your virtual machines and hosts (e.g. backup agents for network based backups in individual virtual machines and RHEL hosts). However you can of course export virtual machines and templates onto the export domain (NFS).
Background: Export domains are temporary storage repositories used to copy and move images between data centers and RHV environments. They can be used to back up virtual machines. An export domain can be moved between data centers; however, it can only be active in one data center at a time. Support for export storage domains backed by storage on anything other than NFS is being deprecated - new export storage domains must be created on NFS storage.
Additionally a tool for backing up and restoring RHV Manager database and certificates is available.
|
=AU41
|
Limited
In the Standard edition there is no specific backup framework for integration of 3rd-party backup solutions as such; however, the Citrix Hypervisor API allows for scripting (e.g. utilizing snapshots) and basic integration with 3rd-party backup products, e.g. PHD backup.
|
|
Integrated Backup
Details
|
No (native);
Yes (with Vendor Add-On: Satellite 6)
RHV-H or RHEL hosts can be installed using traditional methods, either interactively (from ISO or USB flash media) or automated (PXE). There is however no integrated capability to deploy RHV centrally (e.g. using deployment templates and custom images) to bare-metal hosts using the RHV management (like VMware's Auto Deploy or SCVMM's bare-metal deploy).
This is possible with Satellite 6 using Foreman.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
=AU42
|
No (Retired)
Citrix retired the Virtual Machine Protection and Recovery (VMPR) in XenServer 6.2. VMPR was the method of backing up snapshots as Virtual Appliances.
Alternative backup products are available from Quadric Software, SEP, and Unitrends
Background:
With XenServer 6, VM Protection and Recovery (VMPR) became available for Advanced, Enterprise and Platinum Edition customers.
XenServer 5.6 SP1 introduced a basic XenCenter-based backup and restore facility, VM Protection and Recovery (VMPR), which provides a simple backup and restore utility for your critical VMs. Regular scheduled snapshots are taken automatically and can be used to restore VMs in case of disaster. Scheduled snapshots can also be automatically archived to a remote CIFS or NFS share, providing an additional level of security.
Additionally the XenServer API allows for scripting of snapshots. You can also (manually or script) export and import virtual machines for backup purposes.
|
|
|
|
Deployment |
|
|
Automated Host Deployments
Details
|
Yes, CloudInit and Glance support - NEW
NEW
In RHV by default, virtual machines created from templates use thin provisioning.
In the context of templates, thin provisioning of a VM means copy-on-write (aka linked clone or differencing disk), rather than a growing file system that only takes up the storage space it actually uses (which is what thin provisioning usually refers to). All virtual machines based on a given template share the same base image as the template and must remain on the same data domain as the template. Do NOT delete the template if you have created virtual machines from it in this way, as the VMs depend on the existence of the base image!
You can however specify to deploy the VM from a template as a clone, which means that a full copy of the VM is deployed. When selecting clone you can then choose thin (sparse) or pre-allocated provisioning of the full clone. Deploying from a template as a clone gives independence from the base image, but the space savings associated with the copy-on-write approach are lost.
Important: The image used to create templates for RHV must be generalized, e.g. using the Sysprep tool (for Windows) or sys-unconfig (for Linux). This approach is similar to deployment with e.g. SCVMM (and different from VMware, where generalization and customization of the VM are done on deployment, allowing a quick, direct conversion of a template to a VM and vice versa). With RHV, changes to the template image require a deployment, applying changes/updates with subsequent generalization, then a conversion back into a template.
While not the same as a classic template, RHV 3.3 introduced support for cloud-init, which facilitates the provisioning of virtual machines by performing the initial setup of networking, SSH keys, timezones, user data injection, and more.
In RHV 3.5, support was added to consume (use), export and share images with Glance. Glance image services include discovering, registering, and retrieving virtual machine images.
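A minimal example of the kind of cloud-init user-data a provisioning workflow might pass to a new VM on first boot. The hostname, timezone and SSH key values are placeholders; only the `#cloud-config` header and the key names are standard cloud-init conventions:

```python
# Build a minimal cloud-init "#cloud-config" user-data payload of the kind
# RHV passes to a new VM on first boot (hostname, timezone, SSH keys).
# All values below are example placeholders.
def make_user_data(hostname: str, timezone: str, ssh_key: str) -> str:
    return "\n".join([
        "#cloud-config",              # required first line of a cloud-config doc
        f"hostname: {hostname}",
        f"timezone: {timezone}",
        "ssh_authorized_keys:",
        f"  - {ssh_key}",
    ])

user_data = make_user_data("web01.example.com", "Europe/Berlin",
                           "ssh-rsa AAAA... user@host")
print(user_data)
```

cloud-init reads this document inside the guest on first boot, which is how RHV performs the initial setup of networking, SSH keys, timezones and user data injection described above.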
|
=AU43
|
Yes, via UEFI boot
NEW
Citrix Hypervisor supports booting hosts using the UEFI mode. UEFI mode provides a rich set of standardized facilities to the bootloader and operating systems. This feature allows Citrix Hypervisor to be more easily installed on hosts where UEFI is the default boot mode.
|
|
|
No (native);
Yes (with Vendor Add-On: CloudForms Deployables)
There is no integrated functionality in RHV that allows you to deploy a multi-vm construct from a single template (like e.g. with vSphere vApp or VMM Service Templates).
Comment: CloudForms (fee-based cloud offering) has components called Deployables, defined as an application deployment definition that contains one or more assemblies plus meta-data configuration; this configuration specializes a deployment by qualifying it for a specific targeted infrastructure. A Deployable is essentially made of one or more Assemblies with configuration data. Each Assembly is made of one or more Templates and configuration. Each Template lists the software and configuration data. An Assembly optionally indicates the services that it defines or requires. Some of the configuration data may be defined at the time of instance launch.
See http://red.ht/122cnOE for details.
|
=AU44
|
Yes
Templates in Citrix Hypervisor are either the included pre-configured empty virtual machines, which require an OS install (a shell with appropriate settings for the guest OS), or custom templates created by converting an installed (and e.g. sysprep-ed) VM.
There is no integrated customization of the guest available, i.e. you need to sysprep manually.
You can NOT convert a template back into a VM for easy updates. You deploy a VM from template using a full copy or a fast clone using copy on write.
|
|
Tiered VM Templates
Details
|
No (native);
Yes (with Vendor Add-On: Satellite)
There is no ability in RHV to capture or centrally apply settings to hosts (or a group of hosts) in order to ensure standard settings on deployment, or to check hosts for compliance with certain settings.
You can add the new Foreman provider in the administration portal, and use the add Hosts window to select a host provided by Foreman on Red Hat Virtualization Manager.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
=AU45
|
vApp
XenServer 6 introduced vApps - maintained with all later editions.
A vApp is a logical group of one or more related virtual machines (VMs) which can be started up as a single entity in the event of a disaster.
The primary functionality of a XenServer vApp is to start the VMs contained within the vApp in a user predefined order, to allow VMs which depend upon one another to be automatically sequenced. This means that an administrator no longer has to manually sequence the startup of dependent VMs should a whole service require restarting (for instance in the case of a software update). The VMs within the vApp do not have to reside on one host and will be distributed within a pool using the normal rules.
This also means that the XenServer vApp offers more basic capability than e.g. VMware's vApp or Microsoft's Service Templates, which contain more advanced functions.
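The ordered start-up behavior can be sketched as sorting VMs by their user-defined start order and booting them in batches. This is a conceptual illustration, not XenServer code; the VM names and order numbers are examples:

```python
# Sketch of vApp start sequencing: each VM carries a user-defined start
# order; VMs are booted lowest-order first, and VMs sharing an order value
# form one batch.
from itertools import groupby

def vapp_start_sequence(vms):
    """vms: list of (vm_name, start_order) tuples. Returns boot batches in order."""
    ordered = sorted(vms, key=lambda v: v[1])
    return [[name for name, _ in grp] for _, grp in groupby(ordered, key=lambda v: v[1])]

# Example: database must be up before the app tier, which precedes the frontend.
vapp = [("app-server", 2), ("database", 1), ("web-frontend", 3), ("cache", 2)]
print(vapp_start_sequence(vapp))
# [['database'], ['app-server', 'cache'], ['web-frontend']]
```

This captures the point made above: the administrator declares dependencies once via order numbers instead of manually sequencing dependent VMs after every restart.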
|
|
|
No
While RHV supports different types of storage, there is no integrated ability in RHV that allows classification of storage (e.g. by performance or other properties) in order to enable intelligent placement of workloads onto appropriate storage classes.
|
=AU46
|
No
There is no integrated capability to create host templates, or to apply or check hosts for compliance with certain settings.
|
|
|
Yes (Quota, Devices SLA, CPU Shares)
NEW
RHV 3.5 includes quota support and Service Level Agreement (SLA) features for storage I/O bandwidth, network interfaces and CPU QoS/shares:
- Quota provides a way for the administrator to limit resource usage in the system. It gives the administrator a logical mechanism for managing resource allocation for users and groups in the Data Center, allowing the administrator to manage, share and monitor Data Center resources from the engine core point of view.
- A vNIC profile allows the user to limit inbound and outbound network traffic at the virtual NIC level.
- A CPU profile limits the CPU usage of a virtual machine.
- A disk profile limits bandwidth usage, to better allocate bandwidth on limited connections.
- CPU shares is a user-defined number that represents a relative metric for allocating CPU capacity. It defines how often a virtual machine will get a time slice of a CPU when there is no idle CPU time.
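The relative nature of CPU shares can be illustrated with a short calculation. The share values are examples only, and a real scheduler is far more sophisticated than this sketch:

```python
# Sketch of the relative-shares idea: under full contention (no idle CPU
# time), each VM receives CPU time in proportion to its share value.
def cpu_time_slices(shares: dict, total_slices: int = 100) -> dict:
    """Divide total_slices among VMs proportionally to their share values."""
    total = sum(shares.values())
    return {vm: round(total_slices * s / total) for vm, s in shares.items()}

# A VM with 2048 shares gets twice the CPU of one with 1024 under contention.
print(cpu_time_slices({"critical-db": 2048, "batch-job": 1024, "dev-vm": 1024}))
# {'critical-db': 50, 'batch-job': 25, 'dev-vm': 25}
```

The point the sketch makes is that shares are relative, not absolute: doubling every VM's share value changes nothing, and shares only matter when the host is CPU-contended.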
|
=AU47
|
No
There is no ability to associate storage profiles with tiers of storage resources in XenServer (e.g. to facilitate automated, compliant storage tiering).
|
|
|
|
Other |
|
|
|
V2V, P2V & Partners
Whilst there is no integrated capability to perform physical-to-virtual migrations in RHV itself, Red Hat provides P2V tools that let customers export existing physical machines to a virtual infrastructure whilst ensuring that relevant changes are made to the new guest, e.g. paravirtualization drivers.
Red Hat also partners with 3rd parties, e.g. Acronis, to perform V2V and P2V migrations. RHV provides the virt-v2v tool, enabling you to convert and import virtual machines created on other systems such as Xen, KVM and VMware ESX. virt-v2v is available on Red Hat Network (RHN) in the Red Hat Enterprise Virt V2V Tool; details here: http://red.ht/zFpc6i
|
=AU48
|
No
A resource pool in Citrix Hypervisor is hierarchically the equivalent of a vSphere or Hyper-V cluster. There is no functionality to sub-divide resources within a pool.
|
|
|
User Portal
RHV's web-based Power User Portal (introduced with RHV 3.0 and maintained with 3.5) is positioned by Red Hat as an entry-level Infrastructure as a Service (IaaS) user portal.
It allows the user to: create, edit and remove virtual machines; manage virtual disks and network interfaces; assign user permissions to virtual machines; create and use templates to rapidly deploy virtual machines; monitor resource usage and high-severity events; and create and use snapshots to restore virtual machines to a previous state. In conjunction with the quota functionality in RHV, administrators can restrict the resources consumed by users (but there is no integrated request approval or granular resource assignment based on e.g. subsets of the resources through private clouds).
|
=AU49
|
No (XenConvert: retired), V2V: yes
XenConvert was retired in XenServer 6.2
XenConvert allowed conversion of a single physical machine to a virtual machine. The ability to do this conversion is included in the Provisioning Services (PVS) product shipped as part of XenDesktop. Alternative products support the transition of large environments and are available from PlateSpin.
Note: The XenServer Conversion Manager, for converting virtual machines (V2V), remains fully supported and available for Enterprise edition only.
Background:
XenConvert supported the following sources: physical (online) Windows systems OVF, VHD, VMDK or XVA onto these targets: XenServer directly, vhd, OVF or PVS vdisk
|
|
Self Service Portal
Details
|
No (native);
Yes (with Vendor Add-On: CloudForms)
There is no orchestration tool/engine provided with RHV.
This functionality is achieved with Red Hat CloudForms (fee-based vendor add-on), now part of RHCI. With CloudForms, resources are automatically and optimally used via policy-based workload and resource orchestration, ensuring service availability and performance. You can simulate allocation of resources for what-if planning, and continuous insight into granular workload and consumption levels allows chargeback, showback, and proactive planning and policy creation. For details: http://red.ht/1h7DR9T
|
=AU50
|
No (Web Self Service: retired in XS6.2)
Web Self Service was a lightweight portal which allowed individuals to operate their own virtual machines without having administrator credentials to the Citrix Hypervisor host.
|
|
Orchestration / Workflows
Details
|
SELinux, iptables, VLANs, Port Mirroring
The RHV Hypervisor has various security features enabled. Security-Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. SELinux and sVirt add security policy in the kernel for effective intrusion detection, isolation and containment. (SELinux is essentially a set of patches to the Linux kernel, plus some utilities, that incorporate a strong, flexible mandatory access control architecture into the major subsystems of the kernel. For example, with SELinux you can give each qemu process a different SELinux label to prevent a compromised qemu from attacking other processes, and you can label the set of resources that each process can see, so that a compromised qemu can only attack its own disk images.)
Advanced network security features like VLAN tagging and port mirroring are part of RHV, but there are no additional security-specific add-ons included with RHV (e.g. to address advanced fire-walling, edge security capabilities or Anti-Virus APIs).
|
=AU51
|
Yes (Workflow Studio)
Workflow Studio provides a graphical interface for workflow composition in order to reduce scripting. Workflow Studio allows administrators to tie technology components together via workflows. Workflow Studio is built on top of Windows PowerShell and Windows Workflow Foundation. It natively supports Citrix products including XenApp, XenDesktop, Citrix Hypervisor and NetScaler.
Available as component of XenDesktop suite, Workflow Studio was retired in XenDesktop 7.x
|
|
|
Agent-based (RHEL), CIM, SNMP
It is possible to use OEM vendor supplied tools, e.g. hardware monitoring utilities / agents, provided that the RHEL-based hypervisor is used.
RHV-H does not provide the tools and libraries that various tools depend upon, but has basic CIM support for systems management. While it does not offer customizations delivered by dedicated OEM CIM providers, CIM management is available in RHV-H (RHV-H cannot currently be customized to include third-party CIM support). Red Hat uses the open source libcmpiutil as the CIM provider in RHV-H.
RHV-M integrates with 3rd-party plug-ins that can provide systems management - for example, the HP OneView for Red Hat Virtualization (OVRHV) UI plug-in, which allows you to seamlessly manage your HP ProLiant infrastructure from within RHV Manager and provides actionable, valuable insight into the underlying HP hardware (an HP Insight Control plug-in is also available).
|
=AU52
|
Basic (NetScaler - Fee-Based Add-On)
Citrix Hypervisor uses netfilter/iptables firewalling. There are no specific frameworks or APIs for antivirus or firewall integration.
The fee-based Citrix ADC (formerly NetScaler) provides various (network) security-related capabilities, e.g.:
- Citrix Gateway: secure application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile
- Citrix AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the Citrix application delivery solution.
Details here: https://tinyurl.com/y26zjofl
|
|
Systems Management
Details
|
KVM with RHV-H or RHEL - details here
With RHV 3.5, virtualization hosts must run version 6.5 or later of either the Red Hat Virtualization Hypervisor or Red Hat Enterprise Linux Server.
RHV is based on the Kernel-based Virtual Machine (KVM) hypervisor and the oVirt open virtualization management platform (a project started at Red Hat and released to the open source community). KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module that provides the core virtualization infrastructure, and a processor-specific module. The kernel component of KVM has been included in mainline Linux as of 2.6.20.
RHV hosts can be either based on full Red Hat Enterprise Linux 6 based systems with KVM enabled or purpose built RHV-H (hypervisor) hosts. RHV-H is a bare metal, image-based, small-footprint (
|
=AU53
|
Yes (API / SDKs, CIM)
Citrix Hypervisor includes an XML-RPC based API, providing programmatic access to the extensive set of XenServer management features and tools. The Citrix Hypervisor API can be called from a remote system as well as locally on the hypervisor host. Remote calls are generally made securely over HTTPS, using port 443.
Citrix Hypervisor SDK: There are five SDKs available, one each for C, C#, Java, PowerShell, and Python. For XenServer 6.0.2 and earlier, these were provided under an open-source license (LGPL, or GPL with the common linking exception), which allows use (unmodified) in both closed- and open-source applications. From XenServer 6.1 onwards, the bindings are for the most part provided under a BSD license that allows modifications.
Citrix Project Kensho provided a Common Information Model (CIM) interface to the Citrix Hypervisor API and introduced a Web Services Management (WSMAN) interface to Citrix Hypervisor. Management agents can be installed and run in the Dom0 guest.
Details here: https://tinyurl.com/y6zcgmrk
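Because the API is plain XML-RPC over HTTPS, it can be reached from Python's standard library alone. The host URL and credentials below are placeholders, and the commented-out calls follow the XenAPI convention of `session.login_with_password`; verify the exact method names against the Citrix Hypervisor SDK documentation for your version:

```python
# Sketch of talking to the XML-RPC management API from Python's standard
# library. Constructing a ServerProxy does not open a network connection;
# traffic only flows when a method is invoked, so the calls that would
# contact a real host are left commented out.
import xmlrpc.client

# Placeholder host; remote calls go over HTTPS on port 443 as noted above.
proxy = xmlrpc.client.ServerProxy("https://xenserver.example.com")

# Typical XenAPI-style session flow (commented out - requires a live host):
# result = proxy.session.login_with_password("root", "password")
# session_ref = result["Value"]              # responses carry Status/Value
# vms = proxy.VM.get_all(session_ref)["Value"]
# proxy.session.logout(session_ref)

print(type(proxy).__name__)  # ServerProxy
```

The same session-reference pattern underlies all five SDKs mentioned above; they are language bindings generated over this one XML-RPC surface.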
|
|
|
Network and Storage
|
|
|
|
|
|
|
Storage |
|
|
Supported Storage
Details
|
Yes (FC, iSCSI)
Multipathing in RHV provides:
1) Redundancy (provides failover protection).
2) Improved performance (spreads I/O operations over the paths, by default in a round-robin fashion, but other methods including Asynchronous Logical Unit Access (ALUA) are also supported). This applies to block devices (FC, iSCSI), although the equivalent functionality can be achieved with a sufficiently robust network setup for network-attached storage.
RHV Manage Storage Connections feature (Multipath & DR) adds the ability to add/edit/delete storage connections to enable:
- Multipathing management
- Hardware changes
- Simpler failover to remote sites for array-based replication by quickly switching to work with another storage that holds a backup/sync of the contents of the current storage in case of primary storage failure
- Support for the following storage types: NFS, POSIX, local & iSCSI.
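The default round-robin behavior, including skipping a failed path, can be sketched as follows. This is conceptual only; real multipathing is handled in the kernel's device-mapper layer, and the path names are placeholders:

```python
# Conceptual sketch of round-robin multipath I/O: requests are spread
# across all healthy paths in turn, and a failed path is skipped, which
# is the failover half of multipathing.
from itertools import cycle

class Multipath:
    def __init__(self, paths):
        self.healthy = {p: True for p in paths}
        self._cycle = cycle(paths)

    def next_path(self):
        """Return the next healthy path in round-robin order."""
        for _ in range(len(self.healthy)):
            p = next(self._cycle)
            if self.healthy[p]:
                return p
        raise IOError("all paths down")

mp = Multipath(["sda", "sdb"])
print([mp.next_path() for _ in range(4)])  # ['sda', 'sdb', 'sda', 'sdb']
mp.healthy["sdb"] = False                  # simulate a path failure
print([mp.next_path() for _ in range(2)])  # ['sda', 'sda']
```

When the failed path recovers (mark it healthy again), it simply rejoins the rotation, which is the redundancy-plus-load-spreading combination the two numbered points above describe.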
|
=AU65
|
DAS, SAS, iSCSI, NAS, SMB, FC, FCoE, openFCoE
NEW
Citrix Hypervisor data stores are called Storage Repositories (SRs). They support IDE, SATA, SCSI (physical HBA as well as SW initiator) and SAS drives connected locally, and iSCSI, NFS, SMB, SAS and Fibre Channel connected remotely.
Background: The SR and VDI abstractions allow for advanced storage features to be exposed on storage targets that support them. For example, advanced features such as thin provisioning, VDI snapshots, and fast cloning. For storage subsystems that don’t support advanced operations directly, a software stack that implements these features is provided. This software stack is based on Microsoft’s Virtual Hard Disk (VHD) specification.
SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
Reference: https://tinyurl.com/y222m23o
Also refer to the XenServer Hardware Compatibility List (HCL) for more details.
|
|
|
SPM and support for shared FS; manual SPM role assignment
A host known as the Storage Pool Manager (SPM) manages access between hosts and storage.
The SPM host is the only node that has full access within the storage pool; the SPM can modify the image data and metadata as well as the pool's metadata.
RHV 3.3 added the ability to manually assign or reassign the Storage Pool Manager role to hosts, using the administration portal or the REST API.
Additionally, RHV can use any shared file system that the Linux kernel supports (support varies depending on vendor and implementation). For example, IBM's GPFS can be used as a backing store for virtual machines, configured as a POSIX file system for the Storage Domain. Other shared file systems such as GFS and future versions of GlusterFS (Red Hat Storage) are also under consideration. This provides maximum flexibility between traditional NAS (NFS), direct-to-block (standard RHV FC/iSCSI/SAS) and shared file systems (POSIX).
Background: Prior to Red Hat Virtualization 3.1, SPM exclusivity (ensuring the existence of a single arbitrator to avoid data corruption) was maintained and tracked using a feature called safelease, which only maintained exclusivity of one resource, the SPM role.
Sanlock provides the same functionality, but treats the SPM role as one of the resources that can be locked while allowing additional resources to be locked. Applications that require resource locking can register with Sanlock.
|
=AU66
|
Yes
NEW
Dynamic multipathing support is available for Fibre Channel and iSCSI storage arrays (round robin is default balancing mode). Citrix Hypervisor also supports the LSI Multi-Path Proxy Driver (MPP) for the Redundant Disk Array Controller (RDAC) - by default this driver is disabled.
|
|
Shared File System
Details
|
Yes
Booting from SAN is possible, provided that the HBA(s) present this ability to the BIOS; in other words, provided the OS can be booted via the LUN(s) presented by the HBA(s), there should be no problem. When extra LUN(s) are presented to the hypervisors for use as shared storage, RHV only accepts unused LUNs into the storage domain, so it is possible to both boot from SAN and use the same storage paths for shared storage in RHV.
|
=AU67
|
Yes (SR)
Citrix Hypervisor uses the concept of Storage Repositories (disk containers/data stores). These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. All hosts in a single resource pool must have at least one shared SR in common. NAS, iSCSI (Software initiator and HBA are both supported), SAS, or FC are supported for shared storage.
|
|
|
Yes
The RHV-H hypervisor can be installed on USB storage devices or solid-state disks. (The initial boot/install USB device must be a separate device from the installation target.)
|
=AU68
|
Yes (iSCSI, FC, FCoE)
Boot from SAN depends on SAN-based disk arrays with either hardware Fibre Channel or HBA iSCSI adapter support on the host. For a fully redundant boot from SAN environment, you must configure multiple paths for I/O access. To do so, ensure that the root device has multipath support enabled. For information about whether multipath is available for your SAN environment, consult your storage vendor or administrator. If you have multiple paths available, you can enable multipathing in your Citrix Hypervisor deployment upon installation.
XenServer 7.x adds Software-boot-from-iSCSI for Cisco UCS
Boot from SAN is supported in XenServer 6.1 and later (XenServer 5.6 SP1 added support for boot from SAN with multipathing support for Fibre Channel and iSCSI HBAs).
Note: Rolling Pool Upgrade should not be used with Boot from SAN environments. For more information on upgrading boot from SAN environments see Appendix B of the XenServer 7.5 Installation Guide: http://bit.ly/2sK4wGn
|
|
|
RAW, Qcow2
RHV supports two storage formats: RAW and QCOW2
In an NFS data center the Storage Pool Manager (SPM) creates the virtual disk on top of a regular file system as a normal disk in preallocated (RAW) format. Where sparse allocation is chosen, additional disk layers are created in the thinly provisioned QCOW2 (sparse) format.
For iSCSI and SAN (block), the SPM creates a volume group (VG) on top of the provided Logical Unit Numbers (LUNs). During virtual disk creation, either a preallocated (RAW) format or a thinly provisioned QCOW2 (sparse) format is created.
Background: QCOW (QEMU copy on write) decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. This enables advanced features like snapshots. Creating a new snapshot creates a new copy-on-write layer, either a new file or logical volume, with an initial mapping that points all logical blocks to the offsets in the backing file or volume. When writing to a QCOW2 volume, the relevant block is read from the backing volume, modified with the new information and written into the new snapshot QCOW2 volume. Then the map is updated to point to the new place.
Benefits QCOW2 offers over using RAW representation include:
- Copy-on-write support (volume only represents changes to a disk image).
- Snapshot support (a volume can represent multiple snapshots of the image's history).
RAW
The RAW storage format has a performance advantage over QCOW2, as no formatting is applied to images stored in RAW format (reading and writing them requires no additional mapping or reformatting work on the host or manager). When the guest file system writes to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume. Note: the RAW format requires that the entire space of the defined image be preallocated (unless externally managed thin-provisioned LUNs from a storage array are used).
A virtual disk in preallocated (RAW) format has significantly faster write speeds than a virtual disk in thin-provisioned (QCOW2) format, while thin provisioning takes significantly less time to create a virtual disk. The thin-provisioned format is suitable for non-I/O-intensive virtual machines.
Reference here: http://red.ht/1hOwQLz
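The copy-on-write layering described in the background above can be sketched in a few lines of Python. This is a toy model, not QEMU's actual QCOW2 implementation; the class and method names are invented for illustration:

```python
class CowLayer:
    """Minimal sketch of QCOW2-style copy on write: each layer holds a
    logical-block -> data mapping; reads fall through to the backing
    layer, writes land only in the top layer (the delta)."""

    def __init__(self, backing=None):
        self.backing = backing
        self.blocks = {}              # logical block -> bytes

    def read(self, lba):
        if lba in self.blocks:
            return self.blocks[lba]
        if self.backing is not None:
            return self.backing.read(lba)   # fall through the chain
        return b"\x00"                # unallocated blocks read as zeros

    def write(self, lba, data):
        self.blocks[lba] = data       # only the change is stored

base = CowLayer()
base.write(0, b"base")
snap = CowLayer(backing=base)         # snapshot = new empty COW layer
assert snap.read(0) == b"base"        # read falls through to backing
snap.write(0, b"new")                 # write goes to the snapshot layer
assert snap.read(0) == b"new"
assert base.read(0) == b"base"        # backing image stays untouched
```

This is why a QCOW2 volume "only represents changes to a disk image": the backing file is never modified after the snapshot is taken.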
|
=AU69
|
No
While there are several (unofficial) approaches documented, officially no flash drives are supported as boot media for Citrix Hypervisor.
|
|
Virtual Disk Format
Details
|
Default max virtual disk size is 8TB (but it's configurable in the RHV DB)
The default maximum supported virtual disk size is 8TB in RHV 3.5 (but it's configurable in the RHV database).
With the virtio-scsi support added in 3.3, Red Hat now also supports 16,384 logical units per target.
File-level disk size remains unlimited by VDSM; the limits of the underlying filesystem do, however, apply.
|
=AU70
|
vhd, raw disk (LUN)
Citrix Hypervisor supports file-based VHD (NAS, local), block-device-based VHD (FC, SAS, iSCSI) using a Logical Volume Manager on the Storage Repository, or a full LUN (raw disk).
|
|
|
Yes
During the virtual disk creation, either a preallocated format (RAW) or a thinly provisioned Qcow2 (sparse) format can be specified.
A preallocated virtual disk has reserved storage of the same size as the virtual disk itself. The backing storage device (file/block device) is presented as-is to the virtual machine with no additional layering in between. This results in better performance because no storage allocation is required during runtime. On SAN (iSCSI, FCP) this is achieved by creating a block device with the same size as the virtual disk. On NFS this is achieved by filling the backing hard disk image file with zeros. Pre-allocating storage on an NFS storage domain presumes that the backing storage is not QCOW2 formatted and that zeroes will not be deduplicated in the hard disk image file. (If these assumptions are incorrect, do not select Preallocated for NFS virtual disks.)
For sparse virtual disks, backing storage is not reserved and is allocated as needed during runtime. This allows for storage overcommitment, under the assumption that most disks are not fully utilized and storage capacity can thus be used more efficiently. This requires the backing storage to monitor write requests and can cause some performance issues. On NFS, sparse backing storage is achieved simply by using files. On SAN, it is achieved by creating a block device smaller than the virtual disk's defined size and communicating with the hypervisor to monitor necessary allocations. This does not require support from the underlying storage devices.
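The NFS case above (preallocated = image file filled with zeros, sparse = file with only its logical size recorded) can be demonstrated with ordinary files. A minimal sketch, assuming a filesystem that supports sparse files; the function name is invented:

```python
import os
import tempfile

def make_disk(path, size, preallocate):
    """Sketch of the two allocation policies described above: a
    preallocated image is filled with zeros up front (reserving real
    blocks, as on an NFS domain), a sparse image only records its
    logical size and allocates blocks on demand."""
    with open(path, "wb") as f:
        if preallocate:
            f.write(b"\x00" * size)   # blocks are reserved now
        else:
            f.truncate(size)          # just set the apparent size

SIZE = 1 << 20                        # 1 MiB toy "virtual disk"
d = tempfile.mkdtemp()
make_disk(os.path.join(d, "thick.img"), SIZE, preallocate=True)
make_disk(os.path.join(d, "thin.img"), SIZE, preallocate=False)

thick = os.stat(os.path.join(d, "thick.img"))
thin = os.stat(os.path.join(d, "thin.img"))
assert thick.st_size == thin.st_size == SIZE   # same logical size
assert thin.st_blocks <= thick.st_blocks       # thin uses fewer real blocks
```

Both files present the same size to a consumer, but only the preallocated one has paid the full storage cost up front, which is exactly the trade-off the cell describes.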
|
=AU71
|
2TB (16TB with GFS2)
For Citrix Hypervisor 8 the maximum virtual disk sizes are:
- NFS: 2TB minus 4GB
- LVM (block): 2TB minus 4 GB
- GFS2: 16 TB
Reference: https://tinyurl.com/y5o67fdj
|
|
Thin Disk Provisioning
Details
|
Yes (via Hooks)
N_Port ID Virtualization (NPIV) is a function available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides similar functionality for Host Bus Adaptors (HBAs) that SR-IOV provides for network interfaces. With NPIV, virtualized guests can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs).
Note: NPIV is supported via the scripted hooks mechanism, through which many additional advanced features can be implemented. Future versions of RHV will include access to such features via the graphical user interface.
|
=AU72
|
Yes
Thin provisioning for shared block storage is of particular interest in the following cases:
- You want increased space efficiency. Images are sparsely and not thickly allocated.
- You want to reduce the number of I/O operations per second on your storage array. The GFS2 SR is the first SR type to support storage read caching on shared block storage.
- You use a common base image for multiple virtual machines. The images of individual VMs will then typically utilize even less space.
- You use snapshots. Each snapshot is an image and each image is now sparse.
- Your storage does not support NFS and only supports block storage. If your storage supports NFS, we recommend you use NFS instead of GFS2.
- You want to create VDIs that are greater than 2 TiB in size. The GFS2 SR supports VDIs up to 16 TiB in size.
The shared GFS2 type represents disks as a filesystem created on an iSCSI or HBA LUN. VDIs stored on a GFS2 SR are stored in the QCOW2 image format.
|
|
|
Yes
In RHV, virtual machines created from templates use thin provisioning by default. In the context of templates, thin provisioning of a VM means copy on write (also known as a linked clone or difference disk), rather than a growing file system that only takes up the storage space it actually uses (which is what thin provisioning usually refers to). All virtual machines based on a given template share the same base image as the template and must remain on the same data domain as the template.
You can, however, choose to deploy the VM from the template as a clone, which means that a full copy of the VM is deployed. When cloning, you can then select thin (sparse) or preallocated provisioning for the full clone. Deploying from a template as a clone results in independence from the base image, but the space savings associated with copy-on-write approaches are lost.
A virtual disk in preallocated (RAW) format has significantly faster write speeds than one in thin-provisioned (QCOW2) format, while thin provisioning takes significantly less time to create a virtual disk. The thin-provisioned format is suitable for non-I/O-intensive virtual machines.
|
=AU73
|
No
There is no NPIV support in Citrix Hypervisor.
|
|
|
No (native);
Yes (with Vendor Add-On: Red Hat Storage)
There is no native software based replication included in the base RHV product.
However, there is support for managing Red Hat Storage volumes and bricks using Red Hat Virtualization Manager. Red Hat Storage is a software-only, scale-out storage solution that provides flexible unstructured data storage for the enterprise.
Red Hat Storage Console (RHS-C) of Red Hat Storage Server (RHS) for On-Premise provides replication via the native capabilities of RHS, with integration in the RHV-M interface.
RHS-C extends RHV-M 3.x and oVirt Engine technology to manage Red Hat Trusted Storage Pools with management via the Web GUI, REST API and (future) remote command shell.
Note that Red Hat Storage is a fee-based add-on.
A Getting Started with Red Hat Virtualization 3.4 and Red Hat Storage 3 guide is available here: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Storage/Red_Hat_Storage-3-Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Storage-en-US.pdf
|
=AU74
|
Yes - Clone on boot, clone, PVS, MCS
XenServer 6.2 introduced Clone on Boot
This feature supports Machine Creation Services (MCS) which is shipped as part of XenDesktop. Clone on boot allows rapid deployment of hundreds of transient desktop images from a single source, with the images being automatically destroyed and their disk space freed on exit.
General cloning capabilities: When cloning VMs based off a single VHD template, each child VM forms a chain where new changes are written to the new VM and old blocks are read directly from the parent template. When this is done with a file-based VHD (NFS), the clone is thin provisioned. Chains up to a depth of 30 are supported, but be aware of the performance implications.
Comment: Citrix's desktop virtualization solution (XenDesktop) provides two additional technologies that use image-sharing approaches:
- Provisioning Services (PVS) provides a (network) streaming technology that allows images to be provisioned from a single shared-disk image. Details: https://tinyurl.com/yy37zxvt
- With Machine Creation Services (MCS) all desktops in a catalog will be created off a Master Image. When creating the catalog you select the Master and choose if you want to deploy a pooled or dedicated desktop catalog from this image.
Note that neither PVS (for virtual machines) nor MCS is included in the base XenServer license.
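The clone-on-boot lifecycle described above (transient delta created at boot, destroyed and its space freed on exit) can be modelled in a few lines. A toy sketch with invented names, not Citrix code:

```python
class TransientClone:
    """Toy model of clone on boot: each boot creates an empty delta on
    top of a shared master image; all of the VM's writes go into the
    delta, which is thrown away on shutdown."""

    def __init__(self, master):
        self.master = master          # shared, read-only golden image
        self.delta = None

    def boot(self):
        self.delta = {}               # fresh, empty copy-on-write layer

    def write(self, block, data):
        self.delta[block] = data      # never touches the master

    def read(self, block):
        return self.delta.get(block, self.master.get(block))

    def shutdown(self):
        self.delta = None             # disk space freed on exit

master = {0: "gold image"}
vm = TransientClone(master)
vm.boot()
vm.write(0, "user changes")
assert vm.read(0) == "user changes"
vm.shutdown()
vm.boot()                             # next boot starts clean again
assert vm.read(0) == "gold image"
```

This is why the feature suits transient desktop images: hundreds of VMs can share one master, and no per-VM state accumulates across boots.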
|
|
SW Storage Replication
Details
|
FS-Cache
FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over the network and cache it on local disk.
This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS).
|
=AU75
|
No
There is no integrated (software-based) storage replication capability available within XenServer.
|
|
|
No (native);
Yes (with Vendor Add-On: Red Hat Storage)
Red Hat Storage Console (RHS-C) of Red Hat Storage Server (RHS) for On-Premise provides replication via the native capabilities of RHS, with integration in the RHV-M interface.
RHS-C extends RHV-M 3.x and oVirt Engine technology to manage Red Hat Trusted Storage Pools with management via the Web GUI, REST API and (future) remote command shell.
Note that Red Hat Storage is a fee-based add-on.
A Getting Started with Red Hat Virtualization 3.4 and Red Hat Storage 3 guide is available here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Storage/Red_Hat_Storage-3-Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Storage-en-US.pdf
|
=AU76
|
IntelliCache
XenServer 6.5 introduced a read-caching feature that uses host memory in the new 64-bit Dom0 to reduce IOPS on storage networks and improve Login VSI scores, with VMs booting up to 3x faster. The read cache feature is available for XenDesktop & XenApp Platinum users who have an entitlement to this feature.
Within XenServer 7.0, Login VSI scores of 585 have been attained (Knowledge Worker workload on Login VSI 4.1).
IntelliCache is a Citrix Hypervisor feature that can (only!) be used in a XenDesktop deployment to cache temporary and non-persistent operating-system data on the local XenServer host. It is of particular benefit when many Virtual Machines (VMs) all share a common OS image. The load on the storage array is reduced and performance is enhanced. In addition, network traffic to and from shared storage is reduced as the local storage caches the master image from shared storage.
IntelliCache works by caching data from a VM's parent VDI in local storage on the VM host. This local cache is then populated as data is read from the parent VDI. When many VMs share a common parent VDI (for example, by all being based on a particular master image), the data pulled into the cache by a read from one VM can be used by another VM. This means that further access to the master image on shared storage is not required.
Reference: https://tinyurl.com/yyjqom4p
|
|
|
Third-party plug-in framework - e.g. NetApp Virtual Console
There is no specific storage API in RHV (similar to e.g. VMware's VAAI/VASA) that utilizes array-specific capabilities for virtualization purposes (e.g. offload, storage classification etc.). However, RHV's REST API does allow storage-specific calls. The APIs are continually being enhanced in the community.
RHV 3.2 introduced a plug-in framework (maintained with 3.5). This enables third parties to integrate new features and actions directly into the RHV Manager user interface. New menu items, panes, and dialog boxes allow users to access the new functionality the same way they use Red Hat Virtualization's native functionality. One example is the NetApp Virtual Storage Console (VSC). VSC provides integrated virtual storage management, including rapid domain provisioning and cloning of hundreds of virtual machines, while enabling Red Hat administrators to access and execute all of these capabilities using the standard RHV management interface.
RHV 3.3 introduced a Backup API that can also be leveraged with array cloning & replication for DR.
|
=AU77
|
No
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
|
Storage Integration (API)
Details
|
Yes (Quota, Storage I/O SLA)
NEW
RHV 3.5 includes quota support and Service Level Agreement (SLA) for storage I/O bandwidth:
- Quota provides a way for the administrator to limit resource usage in the system, including vDisks. It gives the administrator a logical mechanism for managing disk size allocation for users and groups in the data center.
- Disk profiles limit bandwidth usage in order to allocate bandwidth more effectively on limited connections.
|
=AU78
|
Integrated StorageLink (deprecated)
Integrated StorageLink was retired in XenServer 6.2.
Background: XenServer 6 introduced Integrated StorageLink capabilities. It replaces the StorageLink Gateway technology used in previous editions and removes the requirement to run a VM with the StorageLink components. It provides access to existing storage array-based features such as data replication, de-duplication, snapshot and cloning. Citrix StorageLink allows integration with existing storage systems, gives a common user interface across vendors and talks the language of the storage array, i.e. exposes the native feature set of the array. StorageLink also provides a set of open APIs that link XenServer and Hyper-V environments to third-party backup solutions and enterprise management frameworks.
|
|
|
Neutron Integration (Tech Preview) + various new enhancements - click for details
NEW
New in RHV 3.3 is the initial (tech preview) support for OpenStack Neutron as a network provider on Red Hat Virtualization Manager. OpenStack Neutron can provide networking capabilities for consumption by hosts and virtual machines.
The integration includes:
• Advanced engine for network configuration.
• Open vSwitch distributed virtual switching support.
• Ability to centralize network configurations with Red Hat Enterprise Linux OpenStack Platform (not included).
RHV 3.4 extended the integration with Neutron as follows:
• Leverage Neutron IPAM by configuring subnets on external networks.
• Apply Neutron security groups to VM interfaces by assigning the 'SecurityGroups' custom property to relevant vNIC Profiles.
RHV 3.5 introduces a Neutron virtual appliance, to more easily get up-and-running with a Neutron server.
Also new in 3.3 are the following network features:
- simplified management network setup procedure
- New migration network role (grant the migration role to a cluster network, separating migration data onto the designated migration network, to prevent migration traffic from choking other networks)
- Virtual machine NIC-specific parameters (this enables a range of connection options, including: create a host NIC via Mellanox UFM and connect it directly to a virtual NIC; use OpenStack's Neutron to connect a virtual NIC to one of its defined networks; pass non-standard quality of service (QoS) settings for a virtual NIC)
- Network QoS using virtual NIC profiles (users can now limit the inbound and outbound network traffic on a virtual NIC level by applying profiles which define attributes such as port mirroring, quality of service (QoS) or custom properties)
- Multiple network gateways per host (define a gateway for each logical network on a host)
- Refresh host network configuration (allows the administrator to obtain updated network configuration such as available NICs)
- Improved bond support (add new bonds from the administration portal, in addition to the five predefined bonds for each host)
New in 3.4:
- Automatic propagation of network attribute changes to active hypervisors (instead of having to synchronize each host manually).
- Network labels to ease complex hypervisor networking configurations, comprising many networks.
- Predictable vNIC ordering inside guest OS for newly-created VMs.
- Hypervisors now recognize hotplugged network interfaces.
New in 3.5:
- Notifications in case of bond/NIC changing link state (e.g. link failure).
- Ability to configure custom properties on hypervisor network devices; specifically configuring advanced bridge and ethtool options.
- Support for bigger MAC address ranges.
- Dedicated network connectivity log on hypervisors to ease investigation in case of 'disaster'.
- Add a warning when unplugging a vNIC.
- Add warnings when meddling with the display network.
- Properly display arbitrarily-named hypervisor VLAN devices in the management console.
Advanced networking can also be achieved with RHEL OpenStack Platform. Starting with Red Hat Enterprise Linux 6.4, the Open vSwitch kernel module is included as an enabler for Red Hat Enterprise Linux OpenStack Platform. Open vSwitch is only supported in conjunction with Red Hat products containing the accompanying user space packages. Without these packages, Open vSwitch will not function.
|
=AU79
|
Basic
Virtual disks on block-based SRs (e.g. FC, iSCSI) have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI.
Note: Bear in mind that QoS settings are applied to virtual disks accessing the LUN from the same host. QoS is not applied across hosts in the pool!
|
|
|
|
Networking |
|
|
Advanced Network Switch
Details
|
Yes
RHV 3.0 significantly enhanced the GUI to simplify network setup and also introduced extended bonding (NIC teaming) support that is maintained in RHV 3.1: 1) Active/Backup, 2) Load Balancing, 4) Link Aggregation, 5) Adaptive Transmit, plus Custom (specify advanced options) - there is no Mode 3 listed.
RHV 3.3 added improved bond support (add new bonds from the administration portal, in addition to the five predefined bonds for each host).
Details:
Mode 1 (active-backup policy) sets all interfaces to the backup state while one remains active. Upon failure on the active interface, a backup interface replaces it as the only active interface in the bond. The MAC address of the bond in mode 1 is visible on only one port (the network adapter), to prevent confusion for the switch. Mode 1 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 2 (XOR policy) selects the interface to transmit packets on based on the result of an XOR operation on the source and destination MAC addresses, modulo the slave count. This calculation ensures that the same interface is selected for each destination MAC address used.
Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Virtualization.
Mode 4 (IEEE 802.3ad policy) creates aggregation groups for which included interfaces share the speed and duplex settings. Mode 4 uses all interfaces in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Virtualization.
Mode 5 (adaptive transmit load balancing policy) ensures the outgoing traffic distribution is according to the load on each interface and that the current interface receives all incoming traffic. If the interface assigned to receive traffic fails, another interface is assigned the receiving role instead. Mode 5 is supported in Red Hat Virtualization.
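The Mode 2 selection rule above is a simple computation. The following Python sketch XORs the full source and destination MAC addresses and takes the result modulo the slave count; this is a simplification for illustration (the Linux kernel's default layer-2 hash uses only part of the address), and the function name is invented:

```python
def xor_select_slave(src_mac, dst_mac, slave_count):
    """Sketch of the Mode 2 (XOR policy) rule described above: hash the
    source/destination MAC pair modulo the slave count, so a given
    destination MAC always maps to the same bonded interface."""
    s = int(src_mac.replace(":", ""), 16)
    d = int(dst_mac.replace(":", ""), 16)
    return (s ^ d) % slave_count

# The same src/dst pair always lands on the same bonded interface,
# which is exactly the "same interface per destination MAC" guarantee.
a = xor_select_slave("52:54:00:00:00:01", "52:54:00:00:00:10", 2)
b = xor_select_slave("52:54:00:00:00:01", "52:54:00:00:00:10", 2)
assert a == b
assert 0 <= a < 2
```

Because the mapping is deterministic, packets of one flow never reorder across slaves, which is the main design motivation for an XOR policy over pure round-robin.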
|
=AU80
|
Yes (Open vSwitch) - vSwitch Controller
Open vSwitch is fully supported.
The vSwitch brings visibility, security, and control to XenServer virtualized network environments. It consists of a virtualization-aware switch (the vSwitch) running on each XenServer and the vSwitch Controller, a centralized server that manages and coordinates the behavior of each individual vSwitch to provide the appearance of a single vSwitch.
The vSwitch Controller supports fine-grained security policies to control the flow of traffic sent to and from a VM and provides detailed visibility into the behavior and performance of all traffic sent in the virtual network environment. A vSwitch greatly simplifies IT administration within virtualized networking environments, as all VM configuration and statistics remain bound to the VM even if it migrates from one physical host in the resource pool to another.
Details in the Citrix Hypervisor docs: https://tinyurl.com/y6nq33o4
|
|
|
Yes
With RHV the Network Interfaces tab of the details pane shows VLAN information for the edited network interface. In the VLAN column newly created VLAN devices are shown, with names based on the network interface name and VLAN tag.
Background: RHV is VLAN aware and able to tag and redirect VLAN traffic; however, the VLAN implementation requires a switch that supports VLANs.
At the switch level, ports are assigned a VLAN designation. A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry the same VLAN tag. A VLAN can extend across multiple switches. VLAN tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent to a single port, to be deciphered using software on the machine that receives the traffic.
|
=AU81
|
Yes
XenServer 6.1 added the following functionality, maintained with XenServer 6.5 and later:
- Link Aggregation Control Protocol (LACP) support: enables the use of industry-standard network bonding features to provide fault-tolerance and load balancing of network traffic.
- Source Load Balancing (SLB) improvements: allows up to 4 NICs to be used in an active-active bond. This improves total network throughput and increases fault tolerance in the event of hardware failures. The SLB balancing algorithm has been modified to reduce load on switches in large deployments.
Background:
XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:
• LACP bonding is only available for the vSwitch, while active-active and active-passive are available for both the vSwitch and Linux bridge.
• When the vSwitch is the network stack, you can bond either two, three, or four NICs.
• When the Linux bridge is the network stack, you can only bond two NICs.
XenServer 6.1 provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:
• Active/Active mode, with VM traffic balanced between the bonded NICs.
• Active/Passive mode, where only one NIC actively carries traffic.
• LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the server.
Reference: https://tinyurl.com/y2qbordz
|
|
|
No
There is no PVLAN support in RHV
|
=AU82
|
Yes
NEW
VLANs are supported with XenServer. Switch ports configured as 802.1Q VLAN trunk ports can be used with the Citrix Hypervisor VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case, the Citrix Hypervisor server performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
Citrix Hypervisor VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. You can connect Citrix Hypervisor networks to the PIF representing the physical NIC to see all traffic on the NIC. Alternatively, connect networks to a PIF representing a VLAN to see only the traffic with the specified VLAN tag. You can also connect a network such that it only sees the native VLAN traffic, by attaching it to VLAN 0.
Please refer to https://tinyurl.com/y4wqtsld
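The tagging/untagging the hypervisor performs on the guest's behalf, as described above, is the insertion of a 4-byte 802.1Q tag after the destination and source MAC addresses. A minimal Python sketch of the standard frame format (the helper names are invented; priority/DEI bits are left at zero for simplicity):

```python
import struct

TPID = 0x8100   # 802.1Q Tag Protocol Identifier

def tag_frame(frame, vlan_id):
    """Insert a 4-byte 802.1Q tag after the 12 bytes of destination
    and source MAC addresses, as the hypervisor does for the guest."""
    tci = vlan_id & 0x0FFF                     # 12-bit VLAN ID
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def untag_frame(frame):
    """Strip the tag again; returns (vlan_id, untagged_frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID, "frame is not 802.1Q tagged"
    return tci & 0x0FFF, frame[:12] + frame[16:]

plain = bytes(12) + b"\x08\x00" + b"payload"   # MACs, EtherType, data
tagged = tag_frame(plain, 42)
assert len(tagged) == len(plain) + 4
vid, restored = untag_frame(tagged)
assert vid == 42 and restored == plain
```

Since the round trip restores the original frame exactly, the guest's VIF can stay completely unaware of the VLAN configuration, which is the behaviour the cell describes.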
|
|
|
Guests fully, hypervisors partially
RHV currently uses IPv4 for internal communications and does not use/support IPv6. Using IPv6 at the virtual machine level is fully supported, though, provided that you are using a compatible guest operating system.
With RHV 3.2 the manager UI reports IPv6 addresses in addition to IPv4 addresses.
RHV 3.5 introduces network custom properties, which may be used to assign IPv6 addresses to host interfaces using the vdsm-hook-ipv6 hook; see https://www.mail-archive.com/users@ovirt.org/msg22375.html.
|
=AU83
|
No
XenServer does not support PVLANs.
Please refer to https://tinyurl.com/y4wqtsld
|
|
|
Yes (with hooks)
Hooks (the ability to launch pre-built scripts) allow advanced KVM technology to be supported from the Red Hat Virtualization Manager interface. Pre-built hooks include SR-IOV, which allows bypassing the hypervisor for certain network and disk I/O for near-native speed. There is, however, no capability to configure SR-IOV devices directly through the RHV-M interface (other than launching the hook).
Comment: Evaluated amber, as sample scripts have been developed that include SR-IOV configuration.
Please note: The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Virtualization Hypervisor (RHV-H) is not currently supported.
|
=AU84
|
Yes (guests only)
Guest VMs hosted on Citrix Hypervisor can use any combination of IPv4 and IPv6 configured addresses.
However, Citrix Hypervisor doesn’t support the use of IPv6 in its Control Domain (Dom0). You can’t use IPv6 for the host management network or the storage network. IPv4 must be available for the Citrix Hypervisor host to use.
|
|
|
Yes
The network management UI in RHV allows you to set the MTU for network interfaces (jumbo frames).
|
=AU85
|
SR-IOV
NEW
Experimental support for SR-IOV-capable network cards delivers high-performance networking for virtual machines, catering for workloads that need this type of direct access.
|
|
|
TOE
Currently, RHV hypervisors support TOE. Due to the way RHV provides networking access to the virtual machines, technologies such as TSO/LRO/GRO are currently unsupported. This is set to change when Open vSwitch is supported in upcoming versions.
|
=AU86
|
Yes
You can set the Maximum Transmission Unit (MTU) for a XenServer network in the New Network wizard or for an existing network in its Properties window. The possible MTU value range is 1500 to 9216.
|
|
|
Yes (vNIC Profile)
RHV 3.3 added the ability to control network QoS using virtual NIC profiles through the RHV-M interface.
Users can limit the inbound and outbound network traffic on a virtual NIC level by applying profiles which define attributes such as port mirroring, quality of service (QoS) or custom properties.
|
=AU87
|
Yes (TSO)
TCP Segmentation Offload can be enabled, see https://tinyurl.com/y3d7ktyg
By default, Large Receive Offload (LRO) and Generic Receive Offload (GRO) are disabled on all physical network interfaces. Though unsupported, you can enable it manually https://tinyurl.com/ychkx3cn
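On a RHV/RHEL host these offloads are typically toggled with ethtool; a sketch (the interface name is a placeholder, and note that the matrix marks LRO/GRO as unsupported):

```shell
# Show current offload settings for the NIC
ethtool -k eth0

# Enable TCP Segmentation Offload (supported)
ethtool -K eth0 tso on

# Enabling LRO/GRO is possible but unsupported per the note above
ethtool -K eth0 gro on lro on
```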
|
|
|
Yes (Port Mirroring); toggle through vNIC profile
RHV 3.1 introduced port mirroring capabilities (maintained in 3.5).
It is now possible to configure the virtual Network Interface Card (vNIC) of a virtual machine to run in promiscuous mode. This allows the virtual machine to monitor all traffic to other vNICs exposed by the host on which it runs. Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.
RHV 3.3 adds the ability to create a Virtual Network Interface Controller (VNIC) profile to toggle port monitoring. There are also vendor-supplied UI plug-ins to RHV-M, e.g. the Nagios community plugin.
|
=AU88
|
Yes (outgoing)
QoS of network transmissions can be applied either at the VM level (basic), by setting a KB/sec limit for the virtual NIC, or at the vSwitch level (global policies). With the DVS you can select a rate limit (with units) and a burst size (with units). Traffic to all virtual NICs included in this policy level (e.g. you can create VM groups) is limited to the specified rate, with individual bursts limited to the specified number of packets. To prevent inheriting existing enforcement, the QoS policy at the VM level should be disabled.
Background:
To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.
The QoS value limits the rate of transmission from the VM. As with many QoS approaches the QoS setting does not limit the amount of data the VM can receive. If such a limit is desired, Citrix recommends limiting the rate of incoming packets higher up in the network (for example, at the switch level).
Depending on the networking stack configured in the pool, you can set the Quality of Service (QoS) value on VM virtual interfaces (VIFs) in one of two places: either a) on the vSwitch Controller or b) in Citrix Hypervisor (using the CLI or XenCenter).
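Option b) can be sketched with the xe CLI as follows (the VIF UUID and rate are placeholders):

```shell
# Apply a transmit rate limit to a VM's virtual interface.
# Per the Citrix documentation, kbps here is kilobytes per second.
xe vif-param-set uuid=<vif-uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif-uuid> qos_algorithm_params:kbps=4096

# The VIF must be re-plugged (or the VM rebooted) for the limit to apply
```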
|
|
Traffic Monitoring
Details
|
Yes (virtio), Mem Balloon optimization and error messages
The virtio-balloon driver allows guests to express to the hypervisor how much memory they require. The balloon driver allows the host to efficiently allocate memory to the guest and allows free memory to be allocated to other guests and processes. Guests using the balloon driver can mark sections of the guest's RAM as not in use (balloon inflation). The hypervisor can free that memory and use it for other host processes or other guests on that host. When the guest requires the freed memory again, the hypervisor can reallocate RAM to the guest (balloon deflation).
This includes:
- Memory balloon optimization (users can now enable virtio-balloon for memory optimization on clusters; all virtual machines on cluster level 3.2 and higher include a balloon device, unless specifically removed. When memory balloon optimization is set, MoM will start ballooning to allow memory overcommitment, with the limitation of the guaranteed memory size of each virtual machine.)
- Ballooning error messages (When ballooning is enabled for a cluster, appropriate messages now appear in the Events tab)
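On a plain KVM/libvirt host the ballooning behaviour described above can be observed and driven manually with virsh (RHV normally drives this automatically via MoM); a sketch, with a placeholder domain name:

```shell
# Show balloon statistics reported by the guest's virtio-balloon driver
virsh dommemstat myguest

# Shrink the guest's current allocation to 2 GiB (balloon inflation);
# the value is in KiB by default
virsh setmem myguest 2097152 --live
```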
|
=AU89
|
Yes (Port Mirroring)
The Citrix Hypervisor vSwitch has traffic mirroring capabilities. The Remote Switched Port Analyzer (RSPAN) policies support mirroring traffic sent or received on a VIF to a VLAN in order to support traffic monitoring applications. Use the Port Configuration tab in the vSwitch Controller UI to configure policies that apply to the VIF ports.
|
|
|
Hypervisor
|
|
|
|
|
|
|
General |
|
|
Hypervisor Details/Size
Details
|
No limit stated
The RHV Hypervisor is not limited by a fixed technology (or marketed) restriction. Red Hat lists no Limit for the maximum ratio of virtual CPUs per host.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-Overcommitting_with_KVM.html
In reality, the specifications of the underlying hardware, the nature of the workload in the VM and the overall restriction of 160 logical CPUs per host will determine the limit. RHV has been publicly demonstrated (see SPECvirt) to run over 550 VMs with a mix of SMP vCPU VMs, showing that it can scale beyond the previously (RHV 3.0) supported 512 vCPUs per host.
|
=AU54
|
Citrix Hypervisor 8: Xen 4.11 -based
NEW
Citrix Hypervisor is based on the open-source Xen hypervisor. Citrix Hypervisor 8 runs on the Xen 4.11 hypervisor. This release contains mitigations for the Meltdown and Spectre vulnerabilities. https://tinyurl.com/y88ken4k
Citrix Hypervisor uses paravirtualization and hardware-assisted virtualization, requiring either modified guest OSs or hardware-assisted CPUs (the latter is more commonly seen, as it is less restrictive and hardware-assisted CPUs with Intel VT/AMD-V have become standard). The device drivers are provided through a 64-bit Linux-based guest (CentOS) running in a control virtual machine (Dom0).
|
|
|
|
Host Config |
|
|
Max Consolidation Ratio
Details
|
160 (logical)
With 8 sockets of 10 cores each and Hyper-Threading enabled, you can use up to 160 logical CPUs.
|
=AU55
|
1000 VMs per host
https://tinyurl.com/y5o67fdj
|
|
|
unlimited
There is no license restriction on the maximum number of cores per CPU.
|
=AU56
|
288 (logical)
Citrix Hypervisor supports up to 288 logical CPUs (threads), e.g. 8 sockets with 18 cores each and Hyper-Threading enabled = 288 logical CPUs.
|
|
Max Cores per CPU
Details
|
4TB
The maximum amount of physical RAM installed in a host and recognized by RHV is 4TB (4000GB).
|
=AU57
|
unlimited
The Citrix Hypervisor license does not restrict the number of cores per CPU
|
|
Max Memory / Host
Details
|
160 vCPU per VM
The maximum supported number of virtual CPUs per VM. Please note that the actual number depends on the type of guest operating system (i.e. technical workstations and older OSs might not support the maximum number).
Maximum topology limits are 16 sockets and 16 cores/socket.
|
=AU58
|
5TB
Citrix Hypervisor supports a maximum of 5TB per host. If a host has one or more paravirtualized guests (Linux VMs) running, then a maximum of 128GB RAM is supported on the host.
|
|
|
|
VM Config |
|
|
|
4TB
The maximum amount of configured virtual RAM for an individual VM is 4TB (4000GB).
|
=AU59
|
32
Citrix Hypervisor supports 32 vCPUs for Windows and Linux VMs. Actual numbers vary with the guest OS version (e.g. license restrictions).
|
|
|
No (serial console via hooks possible)
Presence of a serial port can be configured per virtual machine, but there is no interface to access it.
However, VDSM hooks can be used to get serial console access to virtual machine consoles (where graphical access is not possible) - http://www.ovirt.org/Features/Serial_Console_in_CLI
|
=AU60
|
1.5TB
A maximum of 1.5TB is supported for guest OSs. The actual number varies greatly with the guest OS version, so please check for specific guest support.
The maximum amount of physical memory addressable by your operating system varies. Setting the memory to a level greater than the operating system's supported limit may lead to performance issues within your guest. Some 32-bit Windows operating systems can support more than 4GB of RAM through use of the physical address extension (PAE) mode. The limit for 32-bit PV virtual machines is 64GB. Please consult your guest operating system's Administrator's Guide and the Citrix Hypervisor Virtual Machine User's Guide for more details.
|
|
|
Via remote-connection client or hooks
You cannot pass through host USB devices directly to the virtual machine with RHV-H. If utilizing RHEL with KVM as your hypervisor, customization hooks can be leveraged to directly attach devices to the VM from the host.
However, using RHV's technical workstation virtualization solution (RHV-D) you can pass through USB devices from client devices (e.g. a technical workstation) via the SPICE protocol capabilities.
|
=AU61
|
No
You cannot configure serial ports (as virtual hardware) for your VMs.
|
|
|
Yes (disk, NIC, CPU)
NEW
RHV 3.1 added the ability to hot-add both Network Interface Cards as well as Virtual Disk Storage in the virtual machine (maintained with 3.5). New in 3.5 is hot-add CPU (but not removal). The RHV roadmap has publicly disclosed intentions to support hot-plug of memory in the future.
|
=AU62
|
Yes
Citrix Hypervisor includes USB pass-through support. USB pass-through enables more use cases for your virtual environment, such as accessing USB storage devices directly from within the VMs, and running software that requires USB licensing dongles.
|
|
|
No
No GPU acceleration is available.
|
=AU63
|
Yes (disk, NIC)
Citrix Hypervisor supports adding of disks and network adapters while the VM is running - hot plug requires the specific guest OS to support these functions - please check for specific support with your OS vendor.
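A disk hot-add on a running VM can be sketched with the xe CLI (the UUIDs are placeholders):

```shell
# Create a new 10 GiB virtual disk on a storage repository
VDI=$(xe vdi-create sr-uuid=<sr-uuid> name-label=extra-disk virtual-size=10GiB type=user)

# Attach it to the running VM as device 2 and plug it in live
VBD=$(xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=$VDI device=2 mode=RW type=Disk)
xe vbd-plug uuid=$VBD
```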
|
|
Graphic Acceleration
Details
|
DAS, iSCSI, NFS (v4), FC, FCoE, SAS, POSIX;
Virtio SCSI, GlusterFS support, online vDisk resize
NEW
RHV 3.5 added the following Storage features:
- Detaching and moving data domains, with their contained disks and associated VMs between Data Centers or even RHV setups.
- Customizing mount options for NFS domains.
RHV 3.4 added the following Storage features:
- Support for read-only VM disks.
- Mixing domains of different types (e.g. NFS and iSCSI) in a single Data Center.
RHV 3.3 added the following Storage features:
- Virtio-SCSI support (Virtio-SCSI is a new para-virtualized SCSI controller device which provides similar performance as the virtio-blk device, while improving scalability, supporting standard SCSI command sets and device naming, allowing for SCSI device passthrough)
- GlusterFS support (RHV supports native GlusterFS-based storage domains and data center types)
- Online virtual disk resize (resize virtual disks while they are in use by one or more virtual machines, without the need to pause, hibernate or reboot the guests, by specifying an 'Extend size by (GB)' value)
- Disk Block Alignment scan (provides a way to find virtual disks with misaligned partitions, i.e. to check whether a virtual disk, the filesystem installed on it, and its underlying storage are aligned)
- Disk Hook (Adding VDSM hooking points before and after disks hot plug and hot unplug. The feature allows users to add their own functionality before hot-plugging and hot-unplugging disks).
Besides the full support for live Storage Migration RHV 3.2 added a few minor storage related functions, including the ability to perform a storage domain scan (Scan storage domain for new orphaned images and import images into storage domain) and the ability to delete VMs without deleting the virtual disks.
New in RHV 3.1 was the ability to attach any block device to a virtual machine (Direct LUN support); the directlun VDSM hook script is no longer required to perform this task. Another new feature is the ability to configure cross-storage-domain virtual machines, i.e. a virtual machine which has disks on multiple different storage domains. Previously, all disks for a virtual machine had to be stored on the same storage domain. Support for NFS 4 has also been added.
In RHV, storage is logically grouped into storage pools, which are comprised of three types of storage domains: data (VMs and snapshots), export (a temporary storage repository that is used to copy and move images between data centers and RHV instances), and ISO. Although listed separately, most SAS and FCoE devices are presented to the kernel in the same way and are therefore utilized by device-mapper-multipath. RHV uses device-mapper-multipath to gather available disk information, so most SAS and FCoE devices are supported for use with RHV provided that the standard FC option is selected when looking for available storage in RHV.
The data storage domain is the only one required by each data center and exclusive to a single data center. Export and ISO domains are optional, but require NFS.
Storage domains are shared resources and can be implemented using NFS, iSCSI or the Fibre Channel Protocol (FCP). On NFS, all virtual disks, templates, and snapshots are simple files. On SAN (iSCSI/FCP), block devices are aggregated into a logical entity called a Volume Group (VG). This is done using the Logical Volume Manager (LVM) and presents high performance I/O.
|
=AU64
|
Yes
NEW
Citrix Hypervisor is leading the way in the virtual delivery of 3D professional graphics applications and workstations. Its offerings include GPU Pass-through (for NVIDIA, AMD and Intel GPUs) and hardware-based GPU sharing with NVIDIA GRID™ vGPU™, AMD MxGPU™, and Intel GVT-g™.
Citrix Hypervisor 8 customers can use AMD MxGPU on 64-bit versions of Windows 7, Windows 10, Windows Server 2012 R2, Windows Server 2016 and Windows Server 2019 VMs as well as HVM Linux guests.
Details: https://tinyurl.com/y4k7jm98
|
|
|
|
Memory |
|
|
Dynamic / Over-Commit
Details
|
Yes (KSM)
Kernel SamePage Merging (KSM) reduces references to memory pages from multiple identical pages to a single page reference. This helps with optimization for memory density.
|
=AU90
|
Yes (DMC)
XenServer 5.6 introduced Dynamic Memory Control (DMC) that enables dynamic reallocation of memory between VMs. This capability is maintained in later editions.
Dynamic Memory Control (sometimes known as 'dynamic memory optimization', 'memory overcommit' or 'memory ballooning') works by automatically adjusting the memory of running VMs, keeping the amount of memory allocated to each VM between specified minimum and maximum memory values, guaranteeing performance and permitting greater density of VMs per server. Without DMC, when a server is full, starting further VMs will fail with 'out of memory' errors: to reduce the existing VM memory allocation and make room for more VMs you must edit each VM's memory allocation and then reboot the VM. With DMC enabled, even when the server is full, XenServer will attempt to reclaim memory by automatically reducing the current memory allocation of running VMs within their defined memory ranges.
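The dynamic range described above can be set per VM from the xe CLI; a sketch (the UUID and sizes are placeholders):

```shell
# Allow XenServer to balloon this VM between 1 GiB and 4 GiB as needed
xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=1GiB max=4GiB

# The static range bounds the dynamic one and can only be changed
# while the VM is shut down
xe vm-memory-static-range-set uuid=<vm-uuid> min=1GiB max=4GiB
```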
|
|
Memory Page Sharing
Details
|
Yes
RHV 3.0 introduced a feature (maintained with 3.5) called transparent huge pages, where the Linux kernel dynamically creates large memory pages (2MB versus 4KB) for virtual machines, improving performance for VMs that benefit from them (newer OS generations tend to benefit less from larger pages).
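Whether transparent huge pages are active on a given RHEL/RHV host can be checked from the shell:

```shell
# [always] means THP is enabled system-wide; RHV hosts enable it by default
cat /sys/kernel/mm/transparent_hugepage/enabled

# Amount of anonymous memory currently backed by 2MB huge pages
grep AnonHugePages /proc/meminfo
```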
|
=AU91
|
No
Citrix Hypervisor does not feature any transparent page sharing algorithm.
|
|
|
Yes
RHV supports Intel EPT and AMD-RVI
|
=AU92
|
No
There is no support for large memory pages in Citrix Hypervisor
|
|
HW Memory Translation
Details
|
Yes
The Red Hat Virtualization Manager interface allows you to import and export virtual machines (and templates) stored in Open Virtual Machine Format (OVF).
This feature can be used in multiple ways:
- Moving virtual resources between Red Hat Virtualization environments.
- Moving virtual machines and templates between data centers in a single Red Hat Virtualization environment.
- Backing up virtual machines and templates.
|
=AU93
|
Yes
Yes, XenServer supports Intel EPT and AMD-V, see https://tinyurl.com/y6bg4xxk
|
|
|
|
Interoperability |
|
|
|
Comprehensive
RHV takes advantage of the native hardware certification of the Red Hat Enterprise Linux OS. The RHV Hypervisor (RHV-H) is certified for use with all hardware which has passed Red Hat Enterprise Linux certification, except where noted in the Requirements chapter of the installation guide: http://red.ht/1hOZEDA
|
=AU94
|
Yes, incl. vApp
XenServer 6 introduced the ability to create multi-VM and boot-sequenced virtual appliances (vApps) that integrate with Integrated Site Recovery and High Availability. vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard. There is full support for VM disk and OVF appliance imports directly from XenCenter, with the ability to change VM parameters (virtual processor, virtual memory, virtual interfaces, and target storage repository) with the Import wizard. There is full OVF import support for XenServer, XenConvert and VMware.
|
|
|
Limited
NEW
RHV supports the most common server and technical workstation OSs, current support includes:
- Red Hat Enterprise Linux 4.
- Red Hat Enterprise Linux 5.
- Red Hat Enterprise Linux 6.
- Red Hat Enterprise Linux 7.
- Microsoft Windows Server 2003 32-Bit x86.
- Microsoft Windows Server 2003 64-Bit x86.
- Microsoft Windows Server 2003 R2 32-Bit x86.
- Microsoft Windows Server 2003 R2 64-Bit x86.
- Microsoft Windows Server 2008 32-Bit x86.
- Microsoft Windows Server 2008 64-Bit x86.
- Microsoft Windows Server 2008 R2 64-Bit x86.
- Microsoft Windows Server 2012 64-Bit x86.
- Microsoft Windows XP 32-Bit x86.
- Microsoft Windows 7 32-Bit x86.
- Microsoft Windows 7 64-Bit x86.
- Microsoft Windows 8 32-Bit x86.
- Microsoft Windows 8 64-Bit x86.
- SUSE Linux Enterprise Server 10 32-Bit x86.
- SUSE Linux Enterprise Server 10 64-Bit x86.
- SUSE Linux Enterprise Server 11 32-Bit x86.
- SUSE Linux Enterprise Server 11 64-Bit x86.
|
=AU95
|
Improving
XenServer has an improving HCL featuring the major vendors and technologies but compared to e.g. VMware and Microsoft the list is somewhat limited - so check support first. Links to XenServer HCL and XenServer hardware verification test kits are here: http://hcl.xensource.com/
|
|
|
|
=AU96
|
Good
NEW
All major:
- Microsoft Windows 10
- Microsoft Windows Server 2019
- Ubuntu 14.04, 16.04
- SUSE Linux Enterprise Server 11 SP3, SLED 11SP3, SLES 11 SP4, SLES 12, SLED 12, SLED 12 SP1, SLED 15
- Scientific Linux 5.11, 6.6, 7.0, 7.x
- Red Hat Enterprise Linux (RHEL) 5.10, 5.11, 6.5, 6.6, 7.0, 7.x
- Oracle Enterprise Linux (OEL) 5.10, 5.11, 6.5, 6.6, 7.x
- Oracle UEK 6.5
- CentOS 5.10, 5.11, 6.5, 7.0, 7.1, 7.3
- Debian 7.2, 8.0
Refer to the XenServer 7.5 Virtual Machine User Guide for details: http://bit.ly/2rHOmPh
|
|
Container Support
Details
|
REST API, Python CLI, Hooks, SDK
RHV exposes several interfaces for interacting with the virtualization environment. These interfaces are in addition to the user interfaces provided by the Red Hat Virtualization Manager Administration, User, and Reports Portals. Some of the interfaces are supported only for read access, or only when their use has been explicitly requested by Red Hat Support.
Supported Interfaces (Read and Write Access):
- Representational State Transfer (REST) API: With the release of RHV-3 Red Hat introduced a new Representational State Transfer (REST) API. The REST API is useful for developers and administrators who aim to integrate the functionality of a Red Hat Virtualization environment with custom scripts or external applications that access the API via standard HTTP. The REST API exposed by the Red Hat Virtualization Manager is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Python Software Development Kit (SDK): This SDK provides Python libraries for interacting with the REST API. The Python SDK provided by the RHVm-sdk-python package is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Java Software Development Kit (SDK): This SDK provides Java libraries for interacting with the REST API. The Java SDK provided by the RHVm-sdk-java package is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Linux Command Line Shell: The command line shell provided by the RHVm-cli package is a fully supported interface for interacting with the Red Hat Virtualization Manager.
- VDSM Hooks: The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Virtualization Hypervisor is not currently supported.
Additional Supported Interfaces (Read Access)
Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support:
- Red Hat Virtualization Manager History Database
- Libvirt on Virtualization Hosts
Unsupported Interfaces
Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support:
- The vdsClient Command
- Red Hat Virtualization Hypervisor Console
- Red Hat Virtualization Manager Database
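A minimal sketch of calling the REST API described above with only the Python standard library (the manager host name and credentials are hypothetical; the RHV 3.x API returns XML over HTTPS with HTTP Basic authentication):

```python
import base64
import urllib.request

def rhvm_request(url, user, password):
    """Build an authenticated request for the RHV-M REST API."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/xml")  # the RHV 3.x API speaks XML
    return req

# Hypothetical manager host; listing VMs would then be:
req = rhvm_request("https://rhvm.example.com/api/vms", "admin@internal", "secret")
# xml = urllib.request.urlopen(req).read()   # requires a reachable RHV-M
```

The equivalent call with curl would be `curl -k -u 'admin@internal:secret' -H 'Accept: application/xml' https://rhvm.example.com/api/vms`.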
|
|
Yes
Docker support for Citrix Hypervisor is a feature of XenServer 6.5 SP1 or later and is delivered as a supplemental pack named 'xs-container', which also includes support for CoreOS and cloud drives.
More info: https://tinyurl.com/y5jjhz65
|
|
|
REST API
RHV provides its RESTful API for external integration into cloud platforms, for example ManageIQ's cloud management interface.
|
=AU98
|
Yes (SDK, API, PowerShell)
Citrix Hypervisor developer documentation (SDK, API, PowerShell) is available at https://developer-docs.citrix.com/
|
|
|
RHCI: CloudForms, OpenStack, RHV; OpenShift (Fee-Based Add-Ons)
NEW
Comment: Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. It will not list evaluations (green, amber, red) but rather provides information that will help you to evaluate them for YOUR environment.
Overview:
IaaS (private and hybrid)
In July 2013 Red Hat announced Red Hat Cloud Infrastructure (RHCI) - a single-subscription offering that bundles and integrates the following products:
- RHV - Datacenter virtualization hypervisor and management for traditional (ENTERPRISE) workloads
- Cloud-enabled Workloads: RHEL OpenStack - scalable, fault-tolerant platform for developing a managed private or public cloud for CLOUD-ENABLED workloads
- Red Hat CloudForms - Cloud MANAGEMENT and ORCHESTRATION across multiple hypervisors and public cloud providers
Please find more details here - http://red.ht/1oKMZsP
PaaS:
Red Hat also offers OpenShift (PaaS), as on-premise technology as well as an online (public cloud) offering by Red Hat. Details here - http://red.ht/1LRn7ol
There are a number of public and hybrid (on-premise or cloud) offerings that Red Hat positions as complementary like Red Hat Storage Server (scale-out storage servers both on-premise and in the Amazon Web Services public cloud). Details are here: http://red.ht/1ug7XTY
|
=AU99
|
CloudStack APIs, support for AWS API and OpenStack
Citrix CloudPlatform uses a RESTful CloudStack API. In addition to supporting the CloudStack API, CloudPlatform supports the Amazon Web Services (AWS) API. Future cloud API standards from bodies such as the Distributed Management Task Force (DMTF) will be implemented as they become available.
Details on the CloudPlatform API here: https://tinyurl.com/y5bqtjbk
|
|
|
Extensions
|
|
|
|
|
|
|
Cloud |
|
|
|
VDI Included in RHV; HTML5 support (Tech Preview)
There is one single SKU for RHV that includes server and technical workstation virtualization.
Red Hat's Enterprise Virtualization includes an integrated connection broker as well as the ability to manage VDI users via external (LDAP-based) directory services. It is often referred to as RHV-D (technical workstation). The same interface is used to manage both server and technical workstation images (unlike most other solutions, e.g. VMware View or Citrix XenDesktop).
Please note that VDI is an additional charge to the server product and cannot be purchased separately (i.e. without purchasing RHV for servers).
Red Hat Virtualization for technical workstations (RHV-D) consists of:
- Red Hat Hypervisor
- Red Hat Virtualization Manager (RHV-M) as centralized management console with management tools that administrators can use to create, monitor, and maintain their virtual technical workstations (same interface as for server management)
- SPICE (Simple Protocol for Independent Computing Environments) - remote rendering protocol. Initial support for the SPICE-HTML5 console client is offered as a technology preview. This feature allows users to connect to a SPICE console from their browser using the SPICE-HTML5 client.
- Integrated connection broker - a web-based portal from which end users can log into their virtual technical workstations
Note: VDI related capabilities are NOT listed as Fee-Based Add-Ons (no purchase of additional VDI management software is required or licenses involved to enable the VDI management capability).
However, you will require relevant client access licensing to run virtual machines with Windows OSs, see http://bit.ly/1cBdgAm for details
|
=AU100
|
CloudPlatform; OpenStack
Note: Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. It will not list evaluations (green, amber, red) but rather provides information that will help you to evaluate them for YOUR environment.
After the acquisition of Cloud.com in July 2011, Citrix has centered its cloud capabilities around its CloudPlatform suite.
Citrix CloudPlatform (latest release 4.5.1) - powered by Apache CloudStack - is an open-source cloud computing platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudPlatform manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudPlatform to deploy, manage, and configure cloud computing environments.
|
|
|
|
Desktop Virtualization |
|
|
|
No, Possible via Red Hat Storage
NEW
Red Hat acquired Gluster in 2011, opening up the integration of the scale-out GlusterFS NAS filesystem for storage virtualization capabilities (now integrated in Red Hat Storage).
Red Hat Storage is a software-only, scale-out storage solution that provides flexible unstructured data storage for the enterprise.
There is support for managing Red Hat Storage volumes and bricks using Red Hat Virtualization Manager.
Red Hat Storage Console (RHS-C) of Red Hat Storage Server (RHS) for On-Premise provides replication via the native capabilities of RHS, with integration in the RHV-M interface.
RHS-C extends RHV-M 3.x and oVirt Engine technology to manage Red Hat Trusted Storage Pools with management via the Web GUI, REST API and (future) remote command shell.
Note that Red Hat Storage is a fee-based add-on.
A Getting Started with Red Hat Virtualization 3.4 and Red Hat Storage 3 guide is available here:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Storage/Red_Hat_Storage-3-Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Storage-en-US.pdf
|
=AU101
|
Citrix Virtual Apps and Desktops
Citrix is perceived by many to have the most comprehensive portfolio for desktop virtualization, alongside the largest overall market share.
Citrix's success in this space is historically based on its Terminal Services-like capabilities (Hosted SHARED Desktops, i.e. XenApp aka Presentation Server), but Citrix has over time added VDI (Hosted Virtual Desktops), mobility management (XenMobile), networking (NetScaler), cloud for service providers hosting desktops/apps (CloudPlatform) and other comprehensive capabilities to its portfolio (separate fee-based offerings).
Citrix's FlexCast approach promotes the 'any type of virtual desktop to any device' philosophy and facilitates the use of different delivery technologies (e.g. VDI, application publishing or streaming, client hypervisor etc.) depending on the respective use case (so not a one-size-fits-all approach).
XenDesktop 7.x history:
- Citrix's announcement of Project Avalon in 2011 promised the integration of a unified desktop/application virtualization capability into its CloudPlatform product. This was then broken up into the Excalibur Project (unifying XenDesktop and XenApp in the XenDesktop 7.x product) and the Merlin Release, aiming to provide multi-tenant clouds to manage virtual desktops and applications.
- XenDesktop 7.1 added support for Windows Server 2012 R2 and Windows 8.1, and new Studio configuration of server-based graphical processing units (GPUs) - considered an essential hardware component for supporting virtualized delivery of 3D professional graphics applications.
- In Jan 2014 Citrix announced that XenApp is back as product name, rather than using XenDesktop to refer to VDI as well as desktop/application publishing capabilities, also see http://gtnr.it/14KYg4b
- With XenDesktop 7.5 Citrix announced the capability to provision application and/or desktop workloads to public and/or private cloud infrastructures (Citrix CloudPlatform, Amazon and (later) Windows Azure). Wake-on-LAN capability has been added to Remote PC Access, and AppDNA is now included in the product.
- XenDesktop 7.6 includes new features such as session prelaunch and session linger; support for unauthenticated (anonymous) users; and connection leasing, which makes recently used applications and desktops available even when the Site database is unavailable.
VDI-in-a-Box: Citrix also has a VDI-in-a-Box (ViaB) offering (originating in the Kaviza acquisition) - a simple to deploy, easy to manage and scale VDI solution targeting smaller deployments and limited use cases.
In reality ViaB scales to larger (thousands of users) environments, but has (due to its simplified nature and product positioning) restricted use cases compared to the full XenDesktop (there is no direct migration path between ViaB and XenDesktop). ViaB cannot, for instance, provide advanced Hosted Shared Desktops (VDI only), has no advanced graphics capabilities (HDX 3D Pro), limited HA for fully persistent desktops, and no inherent multi-site management capabilities.
Overview here: https://tinyurl.com/y6xttwm3
Recommended read for VDI comparison (Ruben Spruijt's VDI Smackdown): http://www.pqr.com/downloadformulier?file=VDI_Smackdown.pdf
|
|
|
|
1 |
|
|
|
3rd Party
At this time RHV focuses on the management of the virtual and cloud infrastructure.
Partner solutions offer insight into the services running on top of the virtual infrastructure, with integration into RHV.
|
=AU102
|
No
There is no specific storage virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
|
|
|
2 |
|
|
Application Management
Details
|
sVirt & Security Partnerships
Red Hat's RHV partner ecosystem has security partnerships with SourceFire, Catbird, and other security-focused products and solutions. This is in addition to RHV's integrated sVirt capability.
|
=AU103
|
Vendor Add-On: XenDesktop
Citrix Director
The performance monitoring and troubleshooting aspects of application management in the context of Citrix are mostly applicable to XenDesktop 7 Director and XenDesktop 7 EdgeSight.
- Desktop Director is a real-time web tool, used primarily by Help Desk agents. In XenDesktop 7, Director's new troubleshooting dashboard provides real-time health monitoring of your XenDesktop 7 site.
- In XenDesktop 7, EdgeSight provides two key features: performance management and network analysis.
With XenDesktop 7, Director (the real-time assessment and troubleshooting tool) is included in all XenDesktop 7 editions.
The new EdgeSight features are included in both XenApp and XenDesktop Platinum edition entitlements; however, these features are based on the XenDesktop 7 platform. The environment must be XenDesktop 7 in order to leverage the new Director.
|
|
|
|
3 |
|
|
|
Vendor Add-On: CloudForms
NEW
Red Hat CloudForms (a separate product, also sold as part of Red Hat Cloud Infrastructure, RHCI) provides enterprises with operational management tools including monitoring, chargeback, governance, and orchestration across virtual and cloud infrastructure such as Red Hat Virtualization, Amazon Web Services, Microsoft, VMware, and OpenStack.
The CloudForms Management Engine enables context aware, model-driven automation and orchestration of administrative, operational, and self-service activities in enterprise cloud environments. Automation can be driven across a wide spectrum of scenarios including discovery, state changes, performance and trending, event-based, scheduled, via web-services integration or on-demand through an extensible web-based management portal.
|
=AU104
|
Vendor Add-Ons: NetScaler Gateway, App Firewall, CloudBridge, Direct Inspection API
NetScaler provides various network security related capabilities, e.g.:
- NetScaler Gateway: secure application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile
- NetScaler AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information, and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the NetScaler application delivery solution, and is included with Citrix NetScaler, Platinum Edition.
Details here: http://bit.ly/17ttmKk
CloudBridge:
Initially marketed under NetScaler CloudBridge, Citrix CloudBridge provides a unified platform that connects and accelerates applications, and optimizes bandwidth utilization across public cloud and private networks.
CloudBridge encrypts the connection between the enterprise premises and the cloud provider so that all data in transit is secure.
http://bit.ly/17ttSYA
|
|
|
|
4 |
|
|
Workflow / Orchestration
Details
|
Plugin - Symantec Storage Foundation (Fee-Based Add-On)
RHV fully supports a Symantec Storage Foundation plug-in that provides disaster recovery in case of a failure.
The Symantec Storage Foundation plug-in provides automated disaster recovery and site recovery functionality for RHV:
- Automates virtual machine failover to multiple secondary sites with push-button disaster recovery that includes guest, guest network, storage reconfiguration and restart.
- Enables other nodes to take predefined actions when a monitored application fails, for instance to take over and bring up applications elsewhere in the cluster.
- Supports heterogeneous array replication (e.g. Hitachi, EMC and Symantec).
|
=AU105
|
Workflow Studio (incl.)
Workflow Studio provides a graphical interface for workflow composition in order to reduce scripting, allowing administrators to tie technology components together via workflows. Workflow Studio is built on top of Windows® PowerShell and Windows Workflow Foundation, and natively supports Citrix products including XenApp, XenDesktop, XenServer and NetScaler.
Note: Workflow Studio was available as a component of the XenDesktop suite but was retired in XenDesktop 7.x.
|
|
|
|
5 |
|
|
|
Vendor Add-On: CloudForms
NEW
RHV includes enterprise reporting capabilities through a comprehensive management history database, which any reporting application can utilize to generate a range of reports at data center, cluster and host levels.
For chargeback, RHV has third-party integrated solutions, e.g. IBM's Tivoli Usage and Accounting Manager (TUAM), which can convert the metrics from RHV's enterprise reports into fiscal chargeback numbers.
Red Hat CloudForms (a fee-based add-on, also sold as part of Red Hat Cloud Infrastructure, RHCI) provides enterprises with operational management tools including monitoring, chargeback, governance, and orchestration across virtual and cloud infrastructure such as Red Hat Virtualization, Amazon Web Services, Microsoft, VMware, and OpenStack. It provides the capability for cost allocation with usage and chargeback: determining who is using which resources in order to allocate costs and to create and implement chargeback models.
|
=AU106
|
Integrated Site Recovery (incl.)
XenServer 6 introduced Integrated Site Recovery (maintained in 7.x), which utilizes the native remote data replication between physical storage arrays and automates the recovery and failback capabilities. The new approach removes the Windows VM requirement for the StorageLink Gateway components and now works with any iSCSI or hardware HBA storage repository. You can perform failover, failback and test failover, and you can configure and operate it using the Disaster Recovery option in XenCenter. Please note, however, that Site Recovery does NOT interact with the storage array, so you will have to e.g. manually break mirror relationships before failing over. You will need to ensure that the virtual disks as well as the pool metadata (containing all the configuration data required to recreate your VMs and vApps) are correctly replicated to your secondary site.
|
|
|
|
6 |
|
|
|
Vendor Add-Ons: Load Balancer, High Performance
- Red Hat has networking related products like the Load Balancer Add-On for RHEL (http://www.redhat.com/f/pdf/rhel/RHEL6_Add-ons_datasheet.pdf) and the RHEL High Performance Network Add-On (delivering remote direct memory access, RDMA, over Converged Ethernet, RoCE) that can add value to virtualization and cloud environments.
- Common SDN solution integration is available through Neutron integration in RHV 3.3 (tech preview)
- Cisco UCS (VM-FEX) integration
|
=AU107
|
Vendor Add-On: CloudPortal Business Manager, CloudStack Usage Server (Fee-Based Add-Ons)
The workload balancing engine (WLB), introduced with XenServer 5.6 FP1, added support for simple chargeback and reporting. The chargeback report includes, amongst other data, the name of the VM and its uptime, as well as usage for storage, CPU, memory and network reads/writes. You can use the Chargeback Utilization Analysis report to determine what percentage of a resource (such as a physical server) a specific department within your organization used.
See http://bit.ly/2sK6siq
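The Chargeback Utilization Analysis described above boils down to apportioning a shared resource's measured usage across departments. A minimal sketch of that calculation, with hypothetical data and function names (this illustrates the idea only, not the WLB implementation):

```python
def chargeback_shares(usage_by_department):
    """Apportion one shared resource across departments.

    usage_by_department: dict mapping department name to its measured
    consumption of a resource (e.g. CPU-hours on one physical server).
    Returns each department's share as a percentage of the total.
    """
    total = sum(usage_by_department.values())
    if total == 0:
        # No recorded usage: nothing to charge back.
        return {dept: 0.0 for dept in usage_by_department}
    return {dept: 100.0 * used / total
            for dept, used in usage_by_department.items()}

# Hypothetical CPU-hour readings for one host over a billing period
usage = {"engineering": 300.0, "finance": 100.0, "marketing": 100.0}
shares = chargeback_shares(usage)
# engineering gets 60% of the host's cost, finance and marketing 20% each
```

A real chargeback model would apply the same split per metric (storage, memory, network I/O) and multiply each share by the cost assigned to that resource.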
|
|
|
|
7 |
|
|
Network Extensions
Details
|
NSX (Vendor Add-on)
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
=AU108
|
NSX (Vendor Add-on)
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
|