Addon
|
|
|
|
|
|
|
custom |
|
|
Unique Feature 1
|
Add-On not supported by this product
|
Add-On not supported by this product
|
Add-On not supported by this product
|
|
|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
|
|
|
|
Cons
|
|
|
|
|
|
|
Content |
|
|
|
Content created by Virtualizationmatrix. Thanks to Yaniv Dary, Raissa Tona, Sean Cohen, and Larry W. Bailey for content contribution and review.
|
THE VIRTUALISTS
Content created by THE VIRTUALISTS: http://www.thevirtualist.org/
|
WhatMatrix
Content created by WhatMatrix and modified by Romain Serre
|
|
|
|
Assessment |
|
|
|
RHEV 3.6 - Click Here For Details
NEW
Red Hat Enterprise Virtualization is Red Hat's server and workstation virtualization platform. It consists of the smart management product RHEV-M (Red Hat Enterprise Virtualization Manager), which can manage both RHEV-H (Red Hat Enterprise Virtualization Hypervisor), a purpose-built image for easy management, and RHEL-H (Red Hat Enterprise Linux Hypervisor). Both host types contain KVM-based virtualization capabilities. All listed features apply to both RHEV-H and RHEL hosts unless stated otherwise.
RHEV is a complete virtualization management solution for virtualized servers and workstations that aims to provide performance advantages, competitive pricing and a trusted, stable environment. It is built to work best with mission-critical and high-performance Linux workloads, including SAP, on x86 and Power. It can also run Windows guests and is Microsoft SVVP (Server Virtualization Validation Program) certified. RHEV is co-engineered with Red Hat Enterprise Linux and inherits its characteristics of reliability, performance, security and scalability.
RHEV is derived from oVirt, the community open virtualization management project and is a strategic virtualization alternative to proprietary virtualization platforms. Red Hat Enterprise Virtualization is co-engineered with Linux and OpenStack for a smooth transition into Private and Public clouds.
With Red Hat Enterprise Virtualization, you can:
- Take advantage of existing people skills and investment.
- Decrease TCO and accelerate ROI.
- Automate time-consuming and complicated manual tasks.
- Standardize storage, infrastructure, and networking services on OpenStack (tech preview).
|
XenServer 6.5 is available in Standard Edition and Enterprise Edition
On 13th January 2015, Citrix released XenServer 6.5, offering a 64-bit Dom0 and significant networking and disk performance increases.
In May 2015, XenServer 6.5.0 Service Pack 1 came with new features, see http://support.citrix.com/article/CTX142355
- Enhanced guest support
- Intel GVT-d pass-through support for Windows VMs
- NVIDIA GPU pass-through support for Linux VMs
- Ability to manage and monitor Docker™ containers using XenCenter
- Improved user experience for read caching. Customers can now see the status of read caching in XenCenter
Citrix announced in April 2013 that XenServer will be made available open source, as a Linux Foundation Collaborative Project, see http://xenserver.org/.
What is the difference between XenServer and the open-source Xen Project Hypervisor?
The Xen Project hypervisor is used by XenServer. In addition to the open-source Xen Project hypervisor, Citrix XenServer includes:
Control domain (dom0)
XenCenter - A Windows client for VM management
VM Templates for installing popular operating systems as VMs
Enterprise level support
Citrix entered the Hypervisor market with the acquisition of XenSource - the main supporter of the open source Xen project - in Oct 2007. The Xen project continues to exist, see http://www.xen.org/.
Citrix created a suite of commercial products that are named after the open source project, including XenApp (aka Presentation Server), XenDesktop (Citrix's hosted virtual desktop/VDI solution) and XenServer (the only product actually based on the open source Xen hypervisor code).
XenServer uses paravirtualization and hardware-assisted virtualization, requiring either modified guest OSes or hardware-assisted CPUs (the latter is more common, as it is less restrictive and CPUs with Intel VT/AMD-V have become standard). The device drivers are provided through a Linux-based guest (CentOS) running in a control virtual machine (Dom0).
|
WS 2019 - Click Here For Overview
General Overview: Windows Server 2019 was released in Oct 2018 with several (less obvious but important) updates to its virtualization and cloud capabilities. Windows Server 2019 provides major improvements for Hyper-V, Storage Spaces Direct and Network Controller. These features are the foundation of Azure Stack, the Microsoft private cloud offer. Windows Server 2019 comes in 3 editions - Datacenter, Standard and Essentials - but only Datacenter and Standard enable virtualization capabilities (we will therefore not list the other editions). There is also the free Hyper-V Server edition. Datacenter and Standard provide the equivalent set of capabilities and differ only in their virtualization rights, i.e. how many Operating System Environments (OSEs = licensed guests) are included. Datacenter is aimed at highly virtualized private and hybrid cloud environments. As in Windows Server 2016, Windows Server 2019 offers Full Server and Server Core installation options. The minimal nature of Server Core creates limitations: there is no Windows shell and very limited GUI functionality. The Server Core interface is a command prompt with PowerShell support. The matrix will focus on the capabilities of the Full Server installation, pointing out limitations where appropriate.
Windows Server 2019 provides a minimalist deployment model called Nano Server. Nano Server doesn't provide a shell or 32-bit legacy support. Moreover, Nano Server supports only cloud-native applications such as Kubernetes or Docker. For further details, please visit this topic: https://technet.microsoft.com/en-us/windows-server-docs/get-started/getting-started-with-nano-server
Just before the release of Windows Server 2019, Microsoft delivered Windows Admin Center, a web-based tool to manage your Windows Server infrastructure. It provides tools to manage hyperconverged infrastructure and Windows Server (event log, firewall etc.), and it supports third-party extensions to add features. It can serve as a replacement for the MMC.
Microsoft created a new offer called Azure Stack HCI, which is basically a Windows Server 2019 hyperconverged infrastructure managed by Windows Admin Center.
|
|
|
RHEV 3.6 released in March 2016
NEW
RHEV 3.6 is the 9th major release of Red Hat's enterprise virtualization management software based on the KVM hypervisor.
In 2008 Red Hat acquired Qumranet, a technology startup that began the development of KVM. Red Hat's first release of RHEV was v2.1 in 2009. The v3.0 release in 2012 was a major milestone, porting the RHEV-M manager from .NET to Java (and fully open-sourcing it). RHEV 3.1 removed all requirements for any Windows-based infrastructure, while still supporting Microsoft Active Directory and Windows guests. Since RHEV 3.2, Red Hat has provided many feature enhancements, improvements in scale, enhanced reliability and integration points to other Red Hat offerings based on cutting-edge KVM development.
Previous releases:
- RHEV 3.5 - Feb 2015
- RHEV 3.4 - June 2014
- RHEV 3.3 - January 2014
- RHEV 3.2 - June 2013
- RHEV 3.1 - December 2012
- RHEV 3.0 - January 2012
- RHEV 2.2 - August 2010
- RHEV 2.1 - November 2009
|
6.5 Release Date January 2015 (Xen - 2003, Citrix XenServer 2007, 5.6SP2 March 2011, v6: Sep 2011, 6.1: Sep 2012, 6.2 June 2013)
Xen's first public release was in 2003; it became part of the Novell SUSE 10 OS in 2005 (and later also Red Hat). In Oct 2007 Citrix acquired XenSource (the main maintainer of the Xen code) and released XenServer under the Citrix brand. Version 5.6 was released in May 2010, 5.6 SP2 in May 2011, XenServer 6 in Sep 2011, XenServer 6.1 in Sep 2012 and XenServer 6.2 in June 2013.
|
Release Dates - WS2019 October 2018 (initial Hyper-V: Jun 08)
NEW
Windows Server 2019 and System Center 2019 were released in Oct 2018. Windows Server 2016 and System Center 2016 were released in October 2016. Windows Server 2012 R2, in conjunction with System Center 2012 R2, was released in Oct 2013. Windows Server 2012 was released in Sep 2012 and System Center 2012 in April 2012, but SP1 was required to manage Server 2012 environments (released Jan 2013). The initial Hyper-V code was released as a beta version with the GA version of Windows Server 2008 in early 2008; it GA-ed (through Windows Updates) in June, followed by the release of the management app System Center Virtual Machine Manager (SCVMM) in Sept and the standalone Hyper-V Server version in October 2008. The R2 release of Hyper-V became available in August 2009. SP1 was finalised in Feb 2011 (after a public beta in H2/2010).
In addition to System Center, Windows Server is now complemented by Windows Admin Center, a new web-based management tool for Windows Server.
|
|
|
|
Pricing |
|
|
|
Included in hypervisor subscription
The RHEV-M management component is included in the RHEV subscription model (i.e. a single part number for both hypervisor and management).
|
Open Source (free) or two commercial editions, pricing to be announced at GA: Annual: $500 / socket, Perpetual: $1250 / socket (incl. SA and support)
For commercial editions (Standard or Enterprise), XenServer is licensed on a per-CPU socket basis. For a pool to be considered licensed, all XenServer hosts in the pool must be licensed. XenServer only counts populated CPU sockets.
Customers who have purchased XenApp or XenDesktop continue to have an entitlement to XenServer.
In XenServer 6.5, customers should allocate product licenses using a Citrix License Server, as with other Citrix components. From version 6.2.0 onwards, XenServer (other than via the XenDesktop licenses) is licensed on a per-socket basis. Allocation of licenses is managed centrally and enforced by a standalone Citrix License Server, physical or virtual, in the environment. After applying a per-socket license, XenServer will display as Citrix XenServer Per-Socket Edition.
For more info: http://bit.ly/1MuvfLO
Pricing:
Annual - $500 per socket (License and Software Maintenance: SA and support)
Perpetual - $1250 per socket (License and Software Maintenance: SA and support)
|
Standard: $750 for 16 physical cores.
Licensing is on a per-core basis. The base license covers up to 16 physical cores (8 packs of two cores). If you have more than 16 physical cores, you need to buy additional license packs; a license pack covers up to two physical cores (see the sketch below for the arithmetic). If you plan to deploy Nano Server, you need Software Assurance.
Licensing details: http://download.microsoft.com/download/A/2/F/A2F54CF2-FD52-4722-8C9A-728617C2AF17/Hybrid_Cloud_Windows_Server_2016_Licensing_Datasheet_EN-GB.pdf
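To make the core-licensing arithmetic above concrete, here is a minimal Python sketch (illustrative only; the 16-core base and 2-core pack sizes come from the figures quoted above, and real Windows Server licensing has further rules, such as per-processor minimums, that are not modelled here):

import math

BASE_CORES = 16   # physical cores covered by the base licence
PACK_CORES = 2    # physical cores covered by each additional licence pack

def extra_packs_needed(total_physical_cores):
    """Return how many additional 2-core packs a host would need."""
    uncovered = max(0, total_physical_cores - BASE_CORES)
    return math.ceil(uncovered / PACK_CORES)

# Example: a 2-socket host with 12 cores per socket (24 physical cores)
print(extra_packs_needed(24))   # -> 4 additional 2-core packs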
|
|
|
Yes, combined RHEL and RHEV offering 26% savings
Red Hat offers Red Hat Enterprise Linux with Smart Virtualization (a combined solution of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization) that offers 26% savings over buying each product separately. Red Hat Enterprise Linux with Smart Virtualization is the ideal platform for virtualized Linux workloads. It enables organizations to virtualize mission-critical applications while delivering unparalleled performance, scalability, and security features. See details here: http://red.ht/1Tzr9pq
|
Free (XenCenter)
Citrix XenCenter is the Windows-native graphical user interface for managing Citrix XenServer. It is included for no additional cost (open source as well as commercial versions).
|
System Center 2019 Datacenter: $3,607 or Standard $1,323 per managed endpoint (up to 16 physical cores). Windows Admin Center - NEW (Free), Microsoft Azure
NEW
The System Center 2019 Datacenter license model is like that of Windows Server 2019: a single licence covers 16 physical cores on a server. If the server has more cores, you need to buy extra licence packs. System Center 2019 Standard is licensed per managed endpoint.
For virtualized environments, you only have to buy licenses for the Hyper-V hosts.
Windows Admin Center is a new web-based management tool for Windows Server. It can genuinely replace some MMC consoles such as firewall, explorer, device manager and so on. Windows Admin Center is built on extensions that add new capabilities to the product. Third-party vendors can develop extensions for Windows Admin Center.
|
|
Bundle/Kit Pricing
Details
|
RHEV: Not included (RHEL: 1/4/unlimited)
Customers can buy Red Hat Enterprise Linux with Smart Virtualization, which includes both RHEV and unlimited RHEL guests for use as the guest operating system. http://red.ht/1Tzr9pq
RHEV standalone subscriptions include the RHEV-H hypervisor and RHEV-Manager; they do not include the rights to use RHEL as the guest operating system in the virtual machines being managed by RHEV.
The customer would purchase this separately by buying a RHEL for Virtual Datacenter subscription.
Please note that RHEL hosts generate additional subscription costs that are not included with RHEV (see https://www.redhat.com/apps/store/server/ for details). RHEL hosts are priced by sockets (2, 4), number of virtual guests included (1, 4, unlimited) and subscription levels (Standard/Premium).
|
No
No bundles or kits are documented.
|
Windows Server 2019 Datacenter
Windows Server 2019 Datacenter contains all the features you need in a single box. It allows you to run unlimited VMs, plus the software-defined storage and network layers. Windows Admin Center is free. With this single license, you can run a full Software-Defined Datacenter solution.
|
|
Guest OS Licensing
Details
|
Yes (RHEV-M)
RHEV-M - the Red Hat Enterprise Virtualization Manager with a web-driven UI - is the central management console. It is based entirely on open source software, with no dependencies on proprietary server infrastructures or web browsers. It is a centralized management system with a search-driven graphical interface supporting up to hundreds of hosts and thousands of virtual machines. The fully featured enterprise management system enables customers to centrally manage their entire virtual environments, including virtual datacenters, clusters, hosts, guest virtual servers and desktops, networking, and storage.
RHEV-M is also localized in various languages including: French, German, Japanese, Simplified Chinese, Spanish and English.
|
No
A demo Linux VM is included, but there are no guest OS licenses included with the XenServer license. Guest OSes therefore need to be licensed separately.
|
Yes - up to two virtual OSEs with Standard (Windows Server VMs)
The Standard license allows for 2 VMs on this host. In place of the licensed version (i.e. Server 2019), you can run prior (older) versions or lower editions in any of the OSEs of the licensed server. If you have Windows Server 2019 Datacenter edition, you have the right to run the bits of any prior version or lower edition (Datacenter, Standard or Essentials). If you have Windows Server 2019 Standard edition, you have the right to run the bits of any prior version of the Standard or Essentials edition.
The WS2019 Standard edition allows you to run two OSEs. If you need more OSEs, you have to buy additional licenses.
|
|
|
VM Mobility and HA
|
|
|
|
|
|
|
VM Mobility |
|
|
Live Migration of VMs
Details
|
Yes
Each cluster is configured with a minimal CPU type that all hosts in that cluster must support (you specify the CPU type in the RHEV-M GUI when creating the cluster). Guests running on hosts within the cluster all run on this CPU type, ensuring that every guest can be live migrated to any host within the cluster. This cannot be changed after creation without significant disruption. All hosts in a cluster must run the same CPU type (Intel or AMD).
|
Yes XenMotion (1)
XenMotion enables live migration of virtual machines across XenServer hosts without perceived downtime. XenServer supports only one migration at a time (i.e. sequential execution if multiple are started)
|
Yes (Unlimited Concurrent, Shared Nothing; new compression & SMB3 options)
(No major change in WS2019) WS 2012 R2 made the following enhancements to live migration:
1) Improved performance (in addition to the TCP/IP option):
- Compression: this is the default setting since Server 2012 R2 (the content of the virtual machine that is being migrated is compressed and then copied to the destination server over TCP/IP).
- Live Migration over SMB 3.0 (the memory content of the VM is copied to the destination server over an SMB 3.0 connection): SMB Direct is used when the network adapters on the source and destination servers have Remote Direct Memory Access (RDMA) capabilities enabled; SMB Multichannel automatically detects and uses multiple connections when a proper SMB Multichannel configuration is identified.
2) Cross-version migrations: you can live migrate virtual machines from Hyper-V in Windows Server 2012 R2 to Hyper-V in Windows Server 2016.
With WS 2012, the following enhancements had been made to the live migration capabilities:
1) Faster and concurrent migrations: unlimited concurrent migrations using up to 10 Gbit bandwidth. Comment: "unlimited" refers to no hard-coded limitation, but in reality the amount of transferred data (amount of virtual memory / rate of changes) as well as the available bandwidth limit the optimal number of concurrent transfers. The allowable number of concurrent live migrations can be configured manually. Attempted concurrent live migrations in excess of the limit will be queued.
2) Live Migration outside a cluster: Cluster Shared Volumes (CSV) are required to migrate virtual machines within a Hyper-V cluster. In Server 2012, virtual machines can also be stored on SMB3 (CIFS) shares. This allows you to perform a live migration between non-clustered servers running Hyper-V (or between different clusters), while the virtual machine's storage remains on the central SMB share.
3) Shared-nothing migrations: you can also perform a live migration of a virtual machine between two non-clustered servers running Hyper-V when you are only using local storage for the virtual machine. In this case, the virtual machine's storage is mirrored to the destination server over the network, and then the virtual machine is migrated, while it continues to run and provide network services.
|
|
Migration Compatibility
Details
|
Yes
|
Yes (Heterogeneous Pools)
XenServer 5.6 introduced Heterogeneous Pools which enables live migration across different CPU types of the same vendor (requires AMD Extended Migration or Intel Flex Migration), details here: http://bit.ly/1ADu7Py.
This capability is maintained in XenServer 6.x
|
Yes (Processor Compatibility)
Windows Server 2008 R2 Hyper-V introduced a new capability called "processor compatibility mode" for live migration, which enabled live migrations across hosts with differing CPU architectures. This capability is maintained in Server 2019.
|
|
|
Yes - Built-in (CPU/Memory) and pluggable scheduler
NEW
A policy engine determines the specific host on which a virtual machine runs. The policy engine decides which server will host the next virtual machine based on whether load balancing criteria have been defined, and which policy is being used for that cluster. RHEV-M will use live migration to move virtual machines around the cluster as required.
A scheduler handles virtual machine placement, allowing users to create new scheduling policies, and also write their own logic in Python and include it in a policy.
- The scheduler serves scheduling requests for running or migrating virtual machines according to a policy.
- The scheduling policy also includes load balancing functionality.
- Scheduling is performed by applying hard constraints and soft constraints to get the optimal host for that request at a given point in time.
- The infrastructure allowing users to extend the new scheduler is based on a service called ovirt-scheduler-proxy. The service's purpose is to let RHEV admins extend the scheduling process with custom Python filters, weight functions and load balancing modules (see the sketch below).
- Every cluster has a scheduling policy. Administrators can create their own policies or use the built-in policies which were extended to support new capabilities such as shutting down servers for power saving policy.
The load balancing process runs once every minute for each cluster in a data center. You can disable automatic migration for individual VMs or pin them to specific hosts.
You can choose to set the policy as either even distribution or power saving, but NOT both.
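As a rough illustration of what such a custom Python filter might look like, the sketch below drops hosts listed in an "exclude" policy argument. The class layout and the do_filter() entry point are assumptions made for illustration only; check the ovirt-scheduler-proxy documentation for the exact plugin contract in your RHEV version.

class ExcludeListFilter(object):
    """Hypothetical hard-constraint filter for the external scheduler proxy.

    Assumption: the proxy hands the filter a list of candidate host IDs, the
    VM ID and a map of policy arguments, and expects back the subset of hosts
    allowed to run the VM.
    """

    def do_filter(self, hosts_ids, vm_id, args_map):
        # Drop any host the administrator listed in the policy's
        # comma-separated 'exclude' argument; keep all the others.
        excluded = set((args_map or {}).get('exclude', '').split(','))
        return [h for h in hosts_ids if str(h) not in excluded]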
|
Yes
Enabled through XenCenter
|
Yes
Maintenance mode allows you to halt a host any time you need to perform maintenance tasks on the physical host, such as applying security updates or replacing hardware. If you do not have Virtual Machine Manager, you can pause the node to trigger draining mode, which moves all resources associated with the host. You can trigger maintenance mode from the Failover Cluster MMC, VMM and Windows Admin Center.
The advantage of using VMM is that the maintenance mode is also set in Operations Manager to avoid raising alerts.
|
|
Automated Live Migration
Details
|
Yes
When Power Saving is enabled in a cluster, it distributes the load in a way that consolidates virtual machines on a subset of available hosts. This enables surplus hosts that are not in use to be powered down, saving power. You can set the thresholds in the RHEV-M GUI to specify the minimum service level a host is permitted to have.
You must also specify the time interval in minutes that a host is permitted to run below the minimum service level before remaining virtual machines are migrated to other hosts in the cluster - as long as the maximum service level set also permits this.
|
Yes Workload Balancing
Workload Balancing has been re-introduced within XenServer 6.5 Enterprise Edition: Workload Balancing (WLB) automates the process of moving Virtual Machines between hosts to evenly spread Network, CPU, and Disk loads to maximize throughput. WLB keeps a history of the usage of CPU, Disk, and Network for all VMs in the pool so it can predict where workloads can be best located.
XenServer v6.2.0 included additional monitoring features for customers who wish to capture performance and throughput data. Additional 3rd party products which support workload balancing functionality are available from vendors such as VMTurbo, Lanamark, CA Technologies, Goliath, and eG Innovations.
Details here: http://bit.ly/1DLF6Y9
|
Yes - Dynamic Optimization (CPU, mem, disk I/O, Net I/O) and VM Load Balancing
If you don't have System Center, a Windows Server 2016 feature called VM Load Balancing enables you to balance workloads across cluster nodes. You can configure the frequency and the aggressiveness. If a node is paused or not reachable, VM Load Balancing doesn't try to live-migrate virtual machines to that node.
A great improvement in SC 2012 is the departure from the cumbersome automated live migration capability in Server 2008 R2 and VMM 2008 R2 that relied on the Performance and Resource Optimization (PRO) function. With System Center 2012, Dynamic Optimization replaces this host load balancing and is performed independently of PRO or SCOM. This is also supported for vSphere and XenServer environments that support live migration.
During Dynamic Optimization, VMM migrates virtual machines within a host cluster to improve load balancing among hosts. Dynamic Optimization is typically configured on a host group level, to migrate VMs with a specified frequency and aggressiveness. By default, virtual machines are migrated every 10 minutes with medium aggressiveness. If a host group contains stand-alone hosts, host clusters that do not support live migration, or hosts in maintenance mode, Dynamic Optimization is not performed on those hosts. If a host cluster contains virtual machines that are not highly available, those virtual machines are not migrated during Dynamic Optimization.
On-demand Dynamic Optimization is also available by using the Optimize Hosts action in the VMs and Services workspace. After Dynamic Optimization is requested for a host cluster, VMM lists the virtual machines that will be migrated for the administrator's approval.
You can specify Dynamic Optimization settings for the following resources: CPU, memory, disk I/O, and network I/O.
If Dynamic Optimization is enabled, VM Load Balancing is automatically disabled in the cluster.
|
|
|
Yes
NEW
Storage Live Migration is supported and allows migration of virtual machine disks to different storage devices while the virtual machine is running. There is also an option to move an entire storage domain between datacenters or even between setups.
|
Yes Workload Balancing
Power management is part of XenServer Workload Balancing (WLB).
Background: Power Management introduced with 5.6 was able to automatically consolidate workloads on the lowest possible number of physical servers and power off unused hosts when their capacity is not needed. Hosts would be automatically powered back on when needed.
|
Yes - Power Optimization
(No major change in WS2019) Another feature introduced in VMM 2012 is Power Optimization. Power Optimization is an (optional) sub-feature of Dynamic Optimization (only available when a host group is configured for Dynamic Optimization). Through Power Optimization, VMM helps to save energy by turning off hosts that are not needed to meet resource requirements within a host cluster and turning the hosts back on when they are needed again. Please note: Power Optimization ensures that the cluster maintains a quorum if an active node fails and therefore requires a certain number of hosts to remain running in a cluster. See details here: http://bit.ly/TbDCmZ
|
|
Storage Migration
Details
|
200 hosts/cluster
That is the supported maximum number of hosts per RHEV datacenter and also per cluster (the theoretical KVM limit is higher).
|
Yes (Storage XenMotion)
Storage XenMotion in XenServer remains unchanged from v 6.1
XenServer 6.1 introduced the long awaited (live) Storage XenMotion capability
Storage XenMotion allows storage allocation changes while VMs are running or moved from one host to another including scenarios where a) VMs are NOT located on storage shared between the hosts (shared nothing live migration) and (b) hosts are not even in the same resource pool. This enables system administrators to:
- Live migration of a VM disk across shared storage targets within a resource pool (e.g. move between LUNs when one is at capacity);
- Live migration of a VM disk from one storage type to another storage type within a resource pool (e.g. perform storage array upgrades)
- Live migration of a VM disk to or from local storage on a XenServer host within a resource pool (reduce deployment costs by using local storage)
- Rebalance or move VMs between XenServer pools (for example moving a VM from a development environment to a production environment);
Starting with XenServer 6.1, administrators initiating XenMotion and Storage XenMotion operations can specify which management interface the transfers should occur over. Through the use of multiple management interfaces, the virtual disk transfer can occur with less impact on both core XenServer operations and VM network utilization.
Citrix supports up to 3 concurrent Storage XenMotion operations. The maximum number for (non-CDROM) VDIs per VM = six. Allowed Snapshots per VM undergoing Storage XenMotion = 1.
Technical details here: http://bit.ly/1xR85sG
|
Yes (Live and Shared Nothing)
(No major change in WS2019) Server 2012 overcomes the Quick Storage Migration limitation seen with 2008 R2 (where you can initiate a storage migration while the VM remains running for the majority of the transfer, but the virtual machine must be powered off at the end of the migration). Hyper-V in Windows Server 2012 introduces support for moving virtual machine storage while the virtual machine remains running. You can perform this task by using a new wizard in Hyper-V Manager or by using new Hyper-V cmdlets for Windows PowerShell. You can add storage to either a stand-alone computer or to a Hyper-V cluster, and then move virtual machines to the new storage while the virtual machine(s) continue to run. You can also perform a live migration of a virtual machine between two non-clustered servers running Hyper-V when you are only using local storage for the virtual machine. In this case, the virtual machine's storage is mirrored to the destination server over the network, and then the virtual machine is migrated, while it continues to run and provide network services (shared nothing migration).
|
|
|
|
HA/DR |
|
|
|
Yes
High availability is an integrated feature of RHEV and allows for virtual machines to be restarted in case of a host failure.
HA has to be enabled on a virtual machine level. You can specify levels of priority for the VMs (e.g. if resources are constrained, only high-priority VMs are restarted). Hosts that run highly available VMs have to be configured for power management (to ensure accurate fencing in case of host failure).
Fencing Details: When a host becomes non-responsive it potentially retains the lock on the virtual disk images for virtual machines it is running. Attempting to start a virtual machine on a second host could cause data corruption. Fencing allows RHEV-M to safely release the lock (using a fence agent that communicates with the power management card of the host) to confirm that a problem host has truly been rebooted.
RHEV-M gives a non-responsive host a grace period of 30 seconds before any action is taken in order to allow the host to recover from any temporary errors.
Note: The RHEV-M manager needs to be running for HA to function (unlike e.g. VMware HA or Hyper-V HA, which do not rely on vCenter / VMM for the failover capability); also, HA cannot be enabled at the cluster level.
|
16 hosts / resource pool
16 Hosts per Resource Pool.
Please see XenServer 6.5 Configuration Limits document http://bit.ly/17eNuo7
|
64 nodes / 8,000 VMs
(No major change in WS2019) Hyper-V in Windows Server 2012 significantly improves the scale of cluster sizes. It now supports running up to 8,000 virtual machines on a 64-node failover cluster (2008 R2 supported a maximum of 16 cluster nodes and 1,000 virtual machines per cluster). In Windows Server 2012 Server Core you can configure clustering with the SCONFIG cmd utility. In Server 2012, when using the iSCSI Software Target, only up to 5 cluster hosts were supported at GA of the product. http://technet.microsoft.com/en-us/library/gg509003(v=ws.10).aspx
|
|
Integrated HA (Restart VM)
Details
|
Yes (HA, WatchDog)
RHEV supports a watchdog device for Linux guests that restarts virtual machines in case of OS failure. High availability (in addition to monitoring physical hosts) also monitors all virtual machines, so if the virtual machine's operating system crashes, a signal is sent to automatically restart the virtual machine, although this may involve a host change.
|
Yes
XenServer High Availability protects the resource pool against host failures by restarting virtual machines. HA allows for configuration of restart priority and failover capacity. Configuration of HA is simple (effort similar to enabling VMware HA).
|
Yes (NIC and unmanaged Storage failure detection - Virtual Machine Compute Resiliency)
(No major change in WS2019) Virtual machine availability requires Microsoft Failover Clustering to be configured. While this can still be considered added complexity (compared to e.g. VMware's integrated HA), the clustering in Server 2012/R2 has been greatly enhanced, simplified and integrated with VMM:
- You can not only add Hyper-V clusters to VMM but also CREATE and MODIFY Hyper-V clusters from within VMM.
- Availability options for virtual machines on Hyper-V clusters can now be configured using the VMM console, without having to open Failover Cluster Manager (SP1). This includes the ability to configure virtual machine failover prioritization and affinity / anti-affinity rules.
- Ability to deploy the VMM server itself as a highly available virtual machine.
- Support for guest clustering (clustering Windows instances in virtual machines) via Fibre Channel (new virtual Fibre Channel adapter function).
There is no restart priority setting for virtual machines or capacity check (whether failover capacity is available).
New in WS2012 R2 is the ability to detect physical storage failures on storage devices that are not managed by Windows Failover Clustering (SMB 3.0 file shares). Storage failure detection can detect the failure of a virtual machine boot disk or any additional data disks associated with the virtual machine (ensuring that the virtual machine is relocated and restarted on another node, eliminating situations where unmanaged storage failures would not be detected). Also, Hyper-V and Windows Failover Clustering are enhanced to detect network connectivity issues for virtual machines. If the physical network assigned to the virtual machine suffers a failure (such as a faulty switch port or network adapter, or a disconnected network cable), the Windows Failover Cluster will move the virtual machine to another node in the cluster to restore network connectivity.
In Windows Server 2016, VMs can be protected against transient network and storage issues. If a node has transient network issues, the VM state changes to Unmonitored. If the node keeps having this kind of issue for 5 minutes, the node is placed in quarantine and the VM is moved.
If the cluster has a storage issue, the VM state changes to Critical - Paused. When the storage is recovered, the VM resumes from the state it was in before the storage issue.
For more information you can read the following topic: https://blogs.msdn.microsoft.com/clustering/2015/06/03/virtual-machine-compute-resiliency-in-windows-server-2016/
|
|
Automatic VM Reset
Details
|
No
There is no live lock-step mirroring support in RHEV, although the theoretical capability is available in KVM. Red Hat tends to point out that the limitations around this technology (inability to take e.g. snapshots, perform a live storage migration, limited guest vCPU support, high bandwidth/processing requirements) can make it inappropriate for enterprise implementation.
|
No
There is no automated restart/reset of individual virtual machines e.g. to protect against OS failure
|
Yes (Heartbeat)
(No major updates with 2019) The Failover Cluster Manager in Server 2012 has an option to enable heartbeat monitoring for individual VMs (Enable heartbeat monitoring for the virtual machine under VM properties). If the heartbeats stop, indicating that the virtual machine has become unresponsive, the cluster is notified and can attempt to restart the clustered virtual machine or fail it over. In Server 2012 R2 this function has been enhanced to also monitor specific services and applications within virtual machines.
|
|
VM Lockstep Protection
Details
|
No (native);
Yes (with Vendor Add-On: Red Hat Cluster Suite)
There is no integrated application level monitoring or restart of services/vm in case of application failures. RHEV supports watchdogs and HA.
This is possible using Red Hat Cluster Suite. This is a Fee-based Add-On.
|
No
While XenServer can perform VM restart in case of a host failure there is no integrated mechanism to provide zero downtime failover functionality.
|
No
(No major change in WS2019) While MS failover clustering can provide virtual machine restart, there is no integrated mechanism to provide seamless stateful (no downtime) failover in case of host failure.
|
|
Application/Service HA
Details
|
No - See Details
There is no natively provided Site Failover capability in RHEV. Red Hat does provide the tools needed to provide a disaster recovery solution.
This is possible via 3rd party partner integrations (such as Veritas, Acronis, SEP, Commvault).
|
No
There is no application monitoring/restart capability provided with XenServer
|
Yes (VM Compute Resiliency)
(No major update in Windows Server 2019) Windows Server 2016 brought a new feature called VM Compute Resiliency. When a host in the cluster has a network disconnection, it is first isolated. While the host is isolated, the node continues to host the VM role in an unmonitored state. If the network issues are still present after 5 minutes, the VM is moved to another host. If there is a storage issue, the VM state is Paused-Critical until the storage is available again. For further information, you can read this topic: https://blogs.msdn.microsoft.com/clustering/2015/06/03/virtual-machine-compute-resiliency-in-windows-server-2016/
|
|
Replication / Site Failover
Details
|
Yes
NEW
You are able to update both RHEV-H and RHEL-H via the management UI. The manager raises events about updates pending on the hosts and on the manager machine.
Updates can also be managed via Red Hat Satellite: http://red.ht/1Oxs20B
|
Integrated Disaster Recovery (no storage array control)
XenServer 6 introduced the new Integrated Site Recovery (maintained in 6.5), replacing the StorageLink Gateway Site Recovery used in previous versions. It utilizes the native remote data replication between physical storage arrays and automates the recovery and failback capabilities. The new approach removes the Windows VM requirement for the StorageLink Gateway components and it now works with any iSCSI or hardware HBA storage repository (rather than only the restricted storage options with StorageLink support). You can perform failover, failback and test failover. You can configure and operate it using the Disaster Recovery option in XenCenter. Please note, however, that Site Recovery does NOT interact with the storage array, so you will have to e.g. manually break mirror relationships before failing over. You will need to ensure that the virtual disks as well as the pool metadata (containing all the configuration data required to recreate your VMs and vApps) are correctly replicated to your secondary site.
|
Hyper-V Replica - Storage Replica
(No major change in WS2019)Disaster recovery capability has been enhanced through the introduction of Hyper-V Replica - an asynchronous replication of virtual machines over a network link from one Hyper-V host at a primary site to another Hyper-V host at a replica site. In the event of failure at the primary site, administrators can manually fail over production virtual machines to the Hyper-V server at the recovery site.
During failover, virtual machines are brought back to a consistent point in time, and they can be accessed by the rest of the network. The minimum interval of replication is 5 minutes (so there is the usual data loss associated with asynchronous replication).
With WS 2012R2 you can also configure extended replication where the Replica server forwards information about changes that occur on the primary virtual machines to a third server for additional protection. In addition the frequency of replication, which previously was a fixed value, is now configurable. You can also access recovery points for 24 hours. Previous versions had access to recovery points for only 15 hours.
Hyper-V Replica Pros:
- Affordable DR solution (included in Server 2012, no extra licenses required)
- Does not require expensive SAN based replication
- Test failover without impact on production VMs
Cons:
- Manual failover and recovery actions
- Aimed at smaller scale deployments
- no integration with SAN array replication (like vSphere SRM), no synchronous replication
You can of course use (independently of Hyper-V Replica) manual array-based replication mechanisms to replicate virtual machines at the LUN level between sites. This would either be a pretty manual process or require a (fee-based) third-party solution with varying levels of integration and complexity.
With Windows Server 2016, you can now replicate your CSV volumes at the block level from one cluster to another, from one server to another or between volumes. Thanks to Storage Replica, you can replicate your volumes from one site to another to ensure disaster recovery. When a disaster occurs, you just have to remove the replication link and the volume at the second site becomes available in read/write. You then just have to start your VMs.
|
|
|
Management
|
|
|
|
|
|
|
General |
|
|
Central Management
Details
|
Third-party plug-in framework
RHEV focuses on managing the virtual infrastructure and can also manage Red Hat Gluster Storage nodes.
Also, RHEV-M integrates with 3rd party applications via its REST API (a minimal query sketch follows this list), including:
- BMC connector for RHEV-M REST API to collect data for managing RHEV boxes without having to install an agent.
- HP OneView for Red Hat Enterprise Virtualization (OVRHEV) UI plug-in that allows you to seamlessly manage your HP ProLiant infrastructure from within RHEV Manager and provides actionable, valuable insight into the underlying HP hardware (an HP Insight Control plug-in is also available).
- Veritas Storage Foundation, which delivers storage Quality of Service (QoS) at the application level and maximizes your storage efficiency, availability and performance across operating systems. This includes Veritas Cluster Server, which provides automated disaster recovery functionality to keep applications up and running. Cluster Server enables application-specific failover and significantly reduces recovery time by eliminating the need to restart applications in case of a failure.
- Tenable Network Security's Nessus Audit for RHEV-M, which queries the RHEV API and reports that information within a Nessus report.
- Ansible RHEV module that allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the RHEV platform.
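As a rough sketch of the kind of REST call these integrations make, the Python snippet below lists virtual machines over the RHEV-M API using the requests library; the base path, credentials and CA file are assumptions that will differ per deployment:

import requests

API = "https://rhevm.example.com/ovirt-engine/api"   # assumed base path

response = requests.get(
    API + "/vms",                                 # VM collection
    auth=("admin@internal", "password"),          # placeholder credentials
    headers={"Accept": "application/xml"},        # the API returns XML
    verify="/path/to/rhevm-ca.pem",               # assumed CA bundle location
)
response.raise_for_status()
print(response.text)   # XML collection describing the virtual machines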
|
Yes (XenCenter), SCVMM (see details)
XenCenter is the central Windows-based multi-server management application (client) for XenServer hosts (including the open source version).
It is different from other management apps (e.g. SCVMM or vCenter), which typically have a management server/management client architecture where the management server holds centralized configuration information (typically in a 3rd party DB). Unlike these management consoles, XenCenter distributes management data across XenServer servers (hosts) in a resource pool to ensure there is no single point of management failure. If a management server (host) should fail, any other server in the pool can take over the management role. XenCenter is essentially only the client.
License administration is done using a separate web interface.
XenServer 6 introduced the ability to manage XenServer hosts and VMs with System Center Virtual Machine Manager (SCVMM) 2012. System Center Operations Manager (SCOM) 2012 will also be able to manage and monitor XenServer hosts and VMs. System Center integration is available with a special supplemental pack from Citrix.
Citrix announced with v 6.2 that several features will not be further developed and will be removed in a later release. These deprecated features function as in XenServer 6.1.0 and will remain supported, providing a period of overlap while third-party products or alternative solutions are established.
WLB has returned to XenServer 6.5 (Enterprise Edition) having been retired with the launch of XenServer 6.2. The deprecation notice for DVSC has also been rescinded within XenServer 6.5.
This initially included: Microsoft System Center Virtual Machine Manager (SCVMM) support
However, Citrix confirmed: Citrix is not deprecating support for SCVMM. However, SCVMM 2012 R2 does not currently support XS 6.2 (this being a Microsoft decision, because XS 6.2 was not released in time for this to happen).
|
Yes (System Center 2019 / VMM and Windows Admin Center)
NEW
System Center 2019 is the new and improved management suite that provides central management of (heterogeneous) Datacenter resources, including private and public cloud, physical and virtual server and client devices. The suite includes a wide range of components that are now covered by a single license: Operations Manager, Configuration Manager, Data Protection Manager, Service Manager, Virtual Machine Manager, Endpoint Protection and Orchestrator. Virtual Machine Manager still provides the core function of virtualization management but System Center provides now better integration with the other components that allow for wider (public) cloud management, process automation and self service alongside application management and provisioning. Virtual Machine Manager (VMM) enables you to configure and manage your virtualization host, networking, and storage resources in order to create and deploy virtual machines and services to private clouds that you have created.
For two years, Microsoft has worked on a new free product called Windows Admin Center. It can replace the MMC. It provides features to manage Hyper-V, Storage Spaces Direct and clusters. It's a lightweight tool that doesn't require a database and can be installed on a laptop or a console server.
|
|
Virtual and Physical
Details
|
Yes
NEW
RHEV offers the choice to integrate with many LDAP servers (Microsoft Active Directory, Red Hat Directory Server, Red Hat Enterprise IPA, OpenLDAP, iPlanet Directory Server and more) with support for simple or Kerberos based authentication, centrally managed identity, single sign-on services, high availability directory services.
RHEV also provides a complete solution for user/group management using a PostgreSQL database as a backend, which can be used in RHEV the same way as users/groups from LDAP.
RHEV provides a range of pre-configured or default roles, from the Superuser or system administration of the platform, to an end user with permissions to access a single virtual machine only. Additional roles can be added and customized to suit the end user environment.
|
No
XenCenter focuses on managing the virtual infrastructure (XenServer hosts and the associated virtual machines).
|
Yes (Complete SC mgmt, Fabric Updates, Storage Management)
Basically all components of the System Center 2019 management suite provide comprehensive management of virtual and physical environments from the same interface(s). The simplified licensing (which includes all System Center components) allows you to easily expand from physical device management into virtual and private, as well as public cloud management, without having to purchase additional components. A few of the many examples are updates/patches through Configuration Manager as well as the new ability to perform Fabric Updates and Storage Array Management through VMM (SMI-S/SMP based).
|
|
RBAC / AD-Integration
Details
|
No (native)
Yes (with Vendor Add-On: CloudForms)
No - RHEV exclusively manages Red Hat based environments.
With Red Hat CloudForms users can manage multiple hypervisor vendors and reduce training costs to switch over to RHEV. Details here: http://red.ht/I8JG3E (additional cost, not included in RHEV subscription)
|
Yes (hosts/XenCenter)
XenServer 5.6 introduced Role Based Access Control by allowing the mapping of a user (or a group of users) to defined roles (a named set of permissions), which in turn have the ability to perform certain operations. RBAC depends on Active Directory for authentication services. Specifically, XenServer keeps a list of authorized users based on Active Directory user and group accounts. As a result, you must join the pool to the domain and add Active Directory accounts before you can assign roles.
There are 6 default roles: Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator and Read Only - which can be listed and modified using the xe CLI.
Details here: http://bit.ly/1E2HvQ7
|
Yes (SCVMM, Windows Admin Center)
NEW
With System Center 2019 you can now grant permissions to users on a per cloud basis. This eliminates the need to create a new user role for every combination of action/user/cloud. With System Center 2019 you can create Run As accounts to provide the necessary credentials for performing operations in VMM and use the new capabilities available to the Delegated Administrator and Self-Service User roles to give users the ability to perform tasks.
Windows Admin Center provides the ability to authenticate against an Active Directory domain or against Azure Active Directory. However, RBAC is not yet implemented (as of July 2019).
|
|
Cross-Vendor Mgmt
Details
|
Yes (RHEV-M, Power User Portal)
Yes, RHEV-M is Java based and is accessed through a web browser GUI, a RESTful API with session support, a Linux CLI, a Python SDK and a Java SDK.
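As a hedged example of scripting the manager with the Python SDK mentioned above: the module and accessor names (ovirtsdk.api.API, api.vms.list(), get_name()) follow the version 3 SDK shipped with RHEV 3.x as best recalled here and should be verified against the installed SDK; the URL, credentials and CA file are placeholders.

from ovirtsdk.api import API   # RHEV 3.x Python SDK (ovirt-engine-sdk-python)

# Placeholder connection details -- adjust for your environment.
api = API(url="https://rhevm.example.com/ovirt-engine/api",
          username="admin@internal",
          password="password",
          ca_file="ca.crt")

for vm in api.vms.list():        # enumerate virtual machines
    print(vm.get_name())         # generated accessors follow a get_* pattern

api.disconnect()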
RHEV also offers a Power User Portal, a web-based access portal for users (Red Hat positions it as an entry-level Infrastructure as a Service (IaaS) user portal). It allows users to: create, edit and remove virtual machines; manage virtual disks and network interfaces; assign user permissions to virtual machines; create and use templates to rapidly deploy virtual machines; monitor resource usage and high-severity events; and create and use snapshots to restore virtual machines to a previous state.
|
No (native)
Yes (Vendor Add-On: CloudPlatform)
XenCenter only manages Citrix XenServer hosts.
Comments:
- Citrix's Desktop Virtualization product (XenDesktop, fee-based add-on) supports multiple hypervisors (ESX, XenServer, Hyper-V)
- Citrix's CloudPlatform supports multiple hypervisors including Citrix XenServer, Xen, KVM, VMware vSphere and Oracle VM (multiple hypervisors within a single cloud)
- The Citrix XenServer Conversion Manager in 6.1 enables batch import of VMs created with VMware products into a XenServer pool to reduce the costs of converting to a XenServer environment.
|
Improved (VMware) & removed (Citrix)
System Center 2019 includes management capabilities for heterogeneous environments. Components like Operations Manager continue to provide integration with Sun Solaris and various Linux and Unix distributions. SC Orchestrator integrates toolsets from e.g. VMware, IBM and HP into automated workflows. The virtualization and cloud management has also been enhanced to manage different hypervisors, with support for (alongside Hyper-V) VMware vSphere and Citrix XenServer. With VMM 2016, support for XenServer was removed and only ESXi 5.5 and later are supported. Comment: multi-hypervisor management is still a point of contention in the industry. SC 2019 has greatly improved the management scope, but there still is (and arguably always will be) the inability to replicate 100% of the native vendor features and keep up with constantly changing product releases, resulting in the need for the native vendor tool. This primarily applies to virtualization management, while support for heterogeneous hypervisor environments under unified cloud management is a generally accepted scenario.
|
|
Browser Based Mgmt
Details
|
Yes (extended functionality with CloudForms - Fee-Based Add On)
RHEV has comprehensive data warehouse with a stable API and BI reports package that provides a suite of pre-configured reports that enable you to monitor and analyse the system at data center, cluster and host levels.
It also provides dashboards in the UI to monitor the system in these different levels.
Red Hat Enterprise 3.6 includes a deeper integration with Red Hat Satellite that allows the querying of errata information for the RHEV-Manager’s operating system and provides a complete view into critical updates for the infrastructure lifecycle management. The release also includes the ability to modify the health status of Host, Storage Domain, or Virtual Machine objects based on external factors such as hardware failure or OS monitoring alerts. Users can quickly perform an impact analysis of their environment in the event an object beyond RHEV’s normal visibility is at risk of failure.
CloudForms offers advanced cloud and virtualization operations management capabilities.
Features of the cloud and virtualization operations management capabilities:
- delivering IaaS with self-service
- service catalogs, automated provisioning and life cycle management
- monitoring and optimization of infrastructure resources and workloads
- metering, resource quotas, and chargeback
- proactive management, advanced decision support, and intelligent automation through predictive analytics
- provides visibility and reporting for governance, compliance, and management insight
- Enforces enterprise policies in real-time, ensuring cloud security, reliability, and availability
- IT process, task, and event automation.
Note that CloudForms is an additional Fee-Based offering not covered by the RHEV subscription.
Details here: www.redhat.com/en/technologies/cloud-computing/cloudforms
|
No (Web Self Service: retired)
Web Self Service (retired with the launch of XenServer 6.2) was a lightweight portal which allowed individuals to operate their own virtual machines without having administrator credentials to the XenServer host. For large infrastructures, Citrix CloudPlatform is a full orchestration product with far greater capability; for a lightweight alternative, xvpsource.org offers a free open source product.
Related Citrix products have browser-based access; examples are StoreFront (the next generation of Web Interface) and Citrix's (separate) CloudPortal product line that contains CloudPortal Service Manager, a web portal for onboarding, provisioning and customer self-service: http://www.citrix.com/English/ps2/products/feature.asp?contentID=2316906
|
Windows Admin Center
NEW
Windows Admin Center is a web-based management tool that enables you to manage your Hyper-V hosts and VMs, Windows Server, and hyperconverged infrastructure based on Storage Spaces Direct. Windows Admin Center can be used from Edge or Google Chrome. This tool is completely free and you can download it to try it.
|
|
Adv. Operation Management
Details
|
Yes
NEW
Yes, live migration is fully supported with unlimited concurrent migrations (depending only on available resources on other hosts and network speed). RHEV 3.6 adds the ability to use compression and auto-convergence to complete migration of heavier workloads faster. By default it is limited to 3 concurrent outgoing migrations, and each live migration event is limited to a maximum transfer speed of 32 MiBps.
|
No
There is no advanced operations management tool included with XenServer.
Additional Info:
XenServer's Integration Suite Supplemental Pack allows interoperation with Microsoft's System Center Virtual Machine Manager 2012 (SCVMM) and System Center Operations Manager (SCOM). SCOM enables monitoring of host performance when installed on a XenServer host.
Both of these tools can be integrated with your XenServer pool by installing the Integration Suite Supplemental Pack on each of your XenServer hosts.
For virtual desktop environments Citrix EdgeSight is a performance and availability management solution for XenDesktop, XenApp and endpoint systems. EdgeSight monitors applications, devices, sessions, license usage, and the network in real time, allowing users to quickly analyze, resolve, and proactively prevent problems.
System Center and EdgeSight are separate products not included with the XenServer license.
|
Yes (SC Operations Manager, Microsoft Azure)
SC 2019 Operations Manager enables you to monitor services, devices, and operations from a single console. Numerous views show state, health, and performance information, as well as alerts generated for availability, performance, configuration and security situations. Specifically for virtualization and cloud aspects, you can connect System Center 2019 Virtual Machine Manager (VMM) with Operations Manager to monitor the health and availability of the virtual machines and virtual machine hosts that VMM manages. You can also monitor the health and availability of the VMM management server, the VMM database server and library servers, and see diagram views of the virtualized environment through the Operations console in Operations Manager.
A close integration between System Center 2019 Virtual Machine Manager and System Center 2019 Operations Manager introduces System Center cloud monitoring of virtual layers for private cloud environments. To get this functionality, use the System Center 2019 Management Pack for System Center 2016 Virtual Machine Manager Dashboard, which is imported automatically when you integrate Operations Manager and Virtual Machine Manager.
With System Center 2019, management packs are also updated with new metrics for chargeback purposes that are based both on allocation and utilization. This provides better integration with chargeback and reporting, and enables monitoring of tenant-based utilization of resources that allows chargeback and billing.
If you don't have the System Center suite, you can leverage Azure to provide some hybrid features such as Azure Update Management, Azure Automation or Azure Monitor. Azure Update Management provides the ability to handle updates for your Windows infrastructure; it also provides an inventory and a change tracking system. Azure Automation can help you automate some tasks through PowerShell scripts, and Azure Monitor can monitor, aggregate logs and notify you.
Windows Admin Center is the bridge between the On-Prem and Azure world.
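For illustration, here is a minimal, hedged sketch of pulling open alerts with the Operations Manager PowerShell module (the management server name scom01 is an example, not from this matrix):
# Connect to the management group and list open error-severity alerts
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "scom01"
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.Severity -eq "Error" } |
    Select-Object Name, MonitoringObjectDisplayName, TimeRaised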
|
|
|
|
Updates and Backup |
|
|
Hypervisor Upgrades
Details
|
Yes (Red Hat Network)
NEW
Updates to the virtual machines are typically performed as in the physical environment. For Red Hat virtual machines updates can be downloaded from the Red Hat Network. For Windows virtual machines you would apply the relevant MS update mechanisms. There is no specific integrated function in RHEV-M to update virtual machines or templates.
A centralized patching mechanism for Red Hat machines is possible via Satellite. RHEV also shows errata information on updates for RHEL hosts and guest OSs.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
The XenCenter released with XenServer 6.5 allows updates to be applied to all versions of XenServer (commercial and free).
With XenServer 6.5, patching and updating via the XenCenter management console (enabling automated, GUI-driven patch application and upgrades) is supported for both the commercial and free XenServer versions.
XenServer 6 introduced the Rolling Pool Upgrade Wizard. The Wizard simplifies upgrades to XenServer 6.x by performing pre-checks with a step-by-step process that blocks unsupported upgrades. You can choose between automated or semi-automated, where automated can use a network based XenServer update repository (which you have to create) while semi-automated requires local install media. There are still manual steps required for both approaches and no scheduling functionality or online repository is integrated.
|
Yes (Cluster Aware Updates, Fabric Updates; Azure Update Management - NEW)
NEW
VMM 2012 (maintained and expanded with R2 and 2016) finally introduced integration between update services and virtualization hosts (even additional Fabric Servers like library servers, PXE servers, the WSUS server itself and the VMM management server).
VMM supports on demand (marketing speak for manual) compliance scanning and remediation of the fabric servers using compiled Baselines (group of patches/updates).
The scope for VMM update management has been expanded with VMM 2012 R2. You can add servers such as Active Directory, DNS, DHCP and other management servers that are not VMM host servers, as managed computers. You can then use a Windows Server Update Services (WSUS) server to manage updates for these infrastructure servers in the same way that you do for other computers in the VMM environment.
VMM 2012 also supports Cluster Aware Updates (CAU) orchestrated updates - when remediations are performed on a host cluster, VMM places one cluster node at a time in maintenance mode and then installs updates. If the cluster supports live migration, intelligent placement is used to migrate virtual machines off the cluster node. If the cluster does not support live migration, VMM saves state for the virtual machines.
This feature requires a Windows Server Update Service (WSUS) server to be associated with VMM. After you add a WSUS server to VMM, you should not manage the WSUS using the WSUS console.
This feature is a big improvement, but during my testing I found the manual nature of updating the baselines still a little cumbersome.
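As a hedged sketch of the CAU workflow described above (cmdlets from the built-in ClusterAwareUpdating module; the cluster name is an example), a one-off updating run can also be triggered from PowerShell:
# Drain, patch and resume the cluster nodes one at a time via the Windows Update plug-in
Import-Module ClusterAwareUpdating
Invoke-CauRun -ClusterName "HVCluster01" -CauPluginName "Microsoft.WindowsUpdatePlugin" -MaxFailedNodes 0 -MaxRetriesPerNode 1 -RequireAllNodesOnline -Force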
A feature in Microsoft Azure called Update Management enables you to centralize patch management for on-premises and Azure machines (VMs and physical servers). It is free up to 500 minutes of automation per month. Update Management is just an orchestrator that issues orders to the Windows Update service, so the 500-minute allowance is almost never reached with Update Management alone.
|
|
|
Yes; Including RAM
Live VM snapshot with or without memory and live removal of snapshots is supported.
|
No
There is no integrated update engine for the guest OS of virtual machines.
|
Yes (WSUS, SCCM, VMST, Azure Update Management - NEW)
NEW
(no major updates with WS 2019) A feature in Microsoft Azure called Update Management enables you to centralize patch management for on-premises and Azure machines (VMs and physical servers). It is free up to 500 minutes of automation per month. Update Management is just an orchestrator that issues orders to the Windows Update service, so the 500-minute allowance is almost never reached with Update Management alone.
Virtual machine patching is typically done using the existing Windows update methods like Windows Server Update Services/System Center Configuration Manager.
To keep your offline virtual machines, templates and VHDs up-to-date with the latest OS and application updates you can use the Virtual Machine Servicing Tool 2012 (VMST).
- Install the tool (a configuration wizard guides you through the process of connecting the tool to VMM and WSUS).
- Configure virtual machines, templates, and virtual hard disks groups.
- Create and schedule servicing jobs (specifies which virtual machines, templates, and virtual hard disks to update, what resources to use for the update process, and when to start the servicing job).
|
|
|
Yes
There is an API set for third-party tools that offer backup, restore, and replication.
|
Yes
You can take regular snapshots, application consistent point-in-time snapshots (requires Microsoft VSS) and snapshots with memory of a running or suspended VM. All snapshot activities (take/delete/revert) can be done while VM is online.
|
Yes (Checkpoint, Production Checkpoint)
(No major change in WS2019) Snapshots - with WS2012 R2 also referred to as checkpoints - can be taken, deleted and reverted to with or without SCVMM while the VM is online. The Hyper-V Live Merge feature allows a checkpoint to be merged back into the virtual machine while it continues to run (checkpoints include vRAM when taken online).
New in R2 is the ability to export a virtual machine checkpoint while the VM is running.
Before Windows Server 2016, checkpoints were based on the saved state, and not all applications support this state (e.g. AD or SQL). Windows Server 2016 introduces a new checkpoint type called production checkpoints. They are based on backup technology (VSS), so regardless of the hosted application, a production checkpoint can be taken of the VM. This also works for Linux VMs.
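A minimal sketch with the built-in Hyper-V module (the VM name is an example) of selecting and taking a production checkpoint:
# Prefer production (VSS-based) checkpoints, with fallback to standard if the guest cannot honour VSS
Set-VM -Name "SQLVM01" -CheckpointType Production
# Take and list checkpoints while the VM keeps running
Checkpoint-VM -Name "SQLVM01" -SnapshotName "Before-patching"
Get-VMSnapshot -VMName "SQLVM01"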
|
|
Backup Integration API
Details
|
No
There is no natively provided backup capability in RHEV, but Red Hat does provide the tools needed to build a backup solution. Backup is possible via 3rd party partner integration (such as Veritas, Acronis, SEP, Commvault).
|
Limited
There is no specific backup framework for integration of 3rd party backup solutions as such; however, the XenServer API allows for scripting (e.g. utilizing XenServer snapshots) and basic integration with 3rd party backup products, e.g. PHD Backup.
|
Yes (VSS API)
The Volume Shadow Copy Service (VSS) is a set of COM interfaces that implements a framework to allow volume backups to be performed while applications on a system continue to write to the volumes. The Windows SDK can be used to develop VSS applications.
|
|
Integrated Backup
Details
|
No (native);
Yes (with Vendor Add-On: Satellite 6)
RHEV-H or RHEL hosts can be installed using traditional methods either interactively (from ISO, USB flash media) or automated (PXE). There is however no integrated capability to deploy RHEV centrally to bare metal hosts using the RHEV management.
This is possible using Satellite 6 using Foreman. RHEV allows bare metal provisioning via Satellite in a single UI.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
No (Retired)
Citrix retired the Virtual Machine Protection and Recovery (VMPR) in XenServer 6.2. VMPR was the method of backing up snapshots as Virtual Appliances.
Alternative backup products are available from Quadric Software, SEP, and PHD Virtual
Background:
With XenServer 6, VM Protection and Recovery (VMPR) became available for Advanced, Enterprise and Platinum Edition customers.
XenServer 5.6 SP1 introduced a basic XenCenter-based backup and restore facility, VM Protection and Recovery (VMPR), which provides a simple backup and restore utility for your critical VMs. Regular scheduled snapshots are taken automatically and can be used to restore VMs in case of disaster. Scheduled snapshots can also be automatically archived to a remote CIFS or NFS share, providing an additional level of security.
Additionally the XenServer API allows for scripting of snapshots. You can also (manually or script) export and import virtual machines for backup purposes.
|
Yes (WSB & DPM incl. Linux VMs), Azure Backup (New Fee Based Offering)
(No major change in WS2019) There are various approaches to back up virtual machines under Hyper-V: you can use your existing backup applications as if the VM were a physical server (agent in the VM), or one of the many host-level tools (MS or 3rd party) that back up to local devices or cloud repositories.
In Oct 2013 Microsoft announced the availability of the Windows Azure Backup Service for backup of private cloud files to MSs public cloud.
Here are the MS options:
1) The free Windows Server Backup (in conjunction with the Volume Shadow Copy Service, VSS) can be used to back up and restore VMs (including running virtual machines on a standalone host - without the registry edit that was previously required to register the VSS Hyper-V writer). WSB in 2012/2012 R2 can also back up running virtual machines on a Cluster Shared Volume (CSV) - not possible prior to 2012 - but cannot back up VMs on SMB3 shares.
2) Data Protection Manager (DPM) as part of System Center 2012 SP1 (!) supports: standalone or clustered Hyper-V hosts (both CSV and failover cluster are supported), protecting virtual machine that uses SMB storage, protecting Hyper-V with VM Mobility. Online backups for guests running Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, and Windows Server 2003 are supported
DPM 2012 R2 also added the capability to perform online backups of Linux virtual machines.
3) Windows Azure Backup Service - Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials to the cloud.
Improvements in Server 2012 enable incremental backup to be independently enabled on each virtual machine through backup software of your choice. Windows Server 2012 uses "recovery snapshots" to track the differences between backups. These are similar to regular virtual machine snapshots, but they are managed directly by the Hyper-V software. During each incremental backup, only the differences are backed up.
4) Windows Server 2016 has a new backup system based on Change Block Tracking. To back up Windows Server 2016 with this mechanism, it is necessary to implement DPM 2016.
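As an illustrative sketch only (Windows Server Backup cmdlets on a Hyper-V host; the VM names and target volume are examples), a host-level, VSS-based backup of selected VMs could look like this:
# Build and run a one-time WSB policy that protects two VMs at the host level
Import-Module WindowsServerBackup
$policy = New-WBPolicy
Get-WBVirtualMachine | Where-Object { $_.VMName -in "DC01", "APP01" } | ForEach-Object { Add-WBVirtualMachine -Policy $policy -VirtualMachine $_ }
Add-WBBackupTarget -Policy $policy -Target (New-WBBackupTarget -VolumePath "E:")
Start-WBBackup -Policy $policy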
|
|
|
|
Deployment |
|
|
Automated Host Deployments
Details
|
Yes
RHEV allows creation and management of templates. RHEV also supports integration with a Glance image provider used in an OpenStack environment.
|
No
There is no integrated host deployment mechanism in XenServer - manual local or network repository based deployments are required.
|
Yes (bare metal through VMM), incl. Storage Spaces Direct
(No major change in WS2019) Virtual Machine Manager (VMM) 2012 introduced a feature that allows you to discover physical computers on the network, automatically install a supported operating system (Windows Server 2008 R2, R2 + SP1 or Windows Server 2012) using PXE boot, and then automatically convert the computers into managed Hyper-V hosts. The physical computer can be without an OS (bare metal) or a computer where you want to overwrite an existing operating system.
With Virtual Machine Manager in System Center 2012 R2 you can also provision physical computers as file servers, then create a Scale-Out File Server cluster that consists of these computers. You can then manage and monitor these clusters in VMM. To create a Scale-Out File Server cluster, you need to use a physical computer profile (new term with R2 - replacing host profile) that is configured with the Windows File Server role.
In VMM 2016, you can also provision hyperconverged clusters (based on Storage Spaces Direct).
The deployment requires the following:
- PXE server (Windows Deployment Services) available on your network
- The physical host must have a baseboard management controller (BMC) installed that enables out-of-band management (enables remote power management), needs to be enabled for PXE boot and DNS entries prepared
- A generalized (sysprepped) OS image and any required driver files added to the VMM library
- A Physical Computer Profile (library) - the Phyical Computer Profile includes the location of the operating system image and other hardware and operating system configuration settings
When deploying a host, the physical computer boots from the Windows PE image on the PXE server, configures the hardware where required, downloads the OS image (.vhdx file) together with the specified driver files, applies the drivers to the operating system image, enables the Hyper-V server role, and then restarts the computer.
Note: To create a Scale-Out File Server cluster (As of System Center 2012 R2 Virtual Machine Manager only) run the Create Clustered File Server Wizard to discover the physical computers, to configure settings such as the cluster name, provisioning type, and discovery scope, and to start the Scale-Out File Server cluster deployment.
For detail on deploying physical computers with VMM 2012 R2 see here: http://bit.ly/17RYzVQ
|
|
|
No (native);
Yes (with Vendor Add-On: CloudForms)
There is no integrated functionality in RHEV that allows you to deploy a multi-vm construct from a single template.
CloudForms supports tiered VM templates and an ordering portal to deploy them.
This is a Fee-based Add-On; Details - https://www.redhat.com/en/technologies/cloud-computing/cloudforms
|
Yes
Templates in XenServer are either the included pre-configured empty virtual machines which require an OS install (a shell with appropriate settings for the guest OS) or you can convert an installed (and e.g. sysprep-ed) VM into a custom template.
There is no integrated customization of the guest available, i.e. you need to sysprep manually.
You can NOT convert a template back into a VM for easy updates. You deploy a VM from template using a full copy or a fast clone using copy on write.
|
Yes (now incl. Gen 2 templates)
(no major updates with WS 2019) VMM enables automated creation of virtual machines from templates (master images that are customized on deployment). Basically a VM template consists of three main components: one or more virtual disks, a hardware profile (hardware attributes of the virtual machine) and a guest OS profile (custom Windows OS settings for the virtual machine).
VMM 2012 R2 enabled the creation of virtual machine templates that are based on Generation 2 (http://bit.ly/17S06v2) virtual machines and the ability to create Windows and Linux-based virtual machines and multi-VM Services from a gallery of templates.
While you can create a template from an existing VM template or from a virtual hard disk stored in the library, typically the VM template creation involves:
- Create a vm and deploy an OS into it
- Customize OS (Hyper-V Integration Components, Windows Updates etc. ...)
- Generalize the image - run Sysprep (select the Generalize option)
- Create the template from the existing vm
During the deployment you will need to specify OS and virtual hardware settings (by creating these profiles or selecting from existing profiles).
Comment: As the image is sysprepped BEFORE the template is created, the update process for templates is still cumbersome. You cannot quickly convert a template into a VM, update it and convert it back to a template (as you might be used to from VMware vCenter). You have to deploy a VM from the template, update it, generalize it and convert it back into a template. However, the template's VHDX file can be updated offline with DISM, without running a VM from the template; a VHDX file can also be mounted on the host to add files.
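A hedged sketch of that offline-servicing approach with the DISM PowerShell module (paths and update package are hypothetical examples):
# Mount the template disk, inject an update package offline, then commit the changes
Mount-WindowsImage -ImagePath "D:\Templates\WS2019-Template.vhdx" -Index 1 -Path "C:\Mount"
Add-WindowsPackage -Path "C:\Mount" -PackagePath "D:\Updates\example-update.msu"
Dismount-WindowsImage -Path "C:\Mount" -Save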
|
|
Tiered VM Templates
Details
|
Yes (limited native);
Advanced options with Vendor Add-On: Satellite
When adding a host to a cluster it is automatically configured to match the storage, network and other settings in the RHEV Manager. State is also monitored for changes in network and storage that can have an impact on service.
More complex configuration can be done via Satellite 6 using Foreman.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
vApp
XenServer 6 introduced vApps - maintained with v 6.5
A vApp is logical group of one or more related Virtual Machines (VMs) which can be started up as a single entity in the event of a disaster.
The primary functionality of a XenServer vApp is to start the VMs contained within the vApp in a user predefined order, to allow VMs which depend upon one another to be automatically sequenced. This means that an administrator no longer has to manually sequence the startup of dependent VMs should a whole service require restarting (for instance in the case of a software update). The VMs within the vApp do not have to reside on one host and will be distributed within a pool using the normal rules.
This also means that the XenServer vApp offers more basic capability than e.g. VMware's vApp or Microsoft's Service Templates, which contain more advanced functions.
|
Yes (Service Templates), guest cluster support
(no major updates with WS 2019) While vm templates are useful for deploying individual virtual machines they have limitations (single vm only, limited guest customization etc.) when deploying more complex application structures.
In VMM 2012 a service is a set of virtual machines that are configured and deployed together and are managed as a single entity. You can use the Service Template Designer in VMM to create a service templates that define the configuration of the service. The service template includes information about the virtual machines that are deployed as part of the service, which applications to install on the virtual machines, and the networking configuration needed for the service (including the use of a load balancer).
A service template allows for rich in-guest customization (e.g. Windows 2008 R2 role & feature installation, application deployment and script execution).
After the service template is created, you can then deploy the service to a private cloud or to virtual machine hosts.
VMM 2012 R2 introduced support for scripts that create a guest cluster, allowing the script that runs on the first deployed virtual machine to be different than the script that runs on the other virtual machines in the tier.
Service Templates in 2012/2012 R2 details here: http://bit.ly/107oLMh
|
|
|
No
While RHEV supports different types of storage, there is no integrated ability in RHEV that allows classification of storage (e.g. by performance or other properties) in order to enable intelligent placement of workloads onto appropriate storage classes.
|
No
There is no integrated capability to create host templates, or to apply them or check hosts for compliance with certain settings.
|
Yes (physical computer profiles)
(No major change in SC2019) In VMM 2012 R2, host profiles are replaced by physical computer profiles. You can use physical computer profiles in the same manner that you used host profiles to provision a bare-metal computer as a Hyper-V host. In addition, you can use physical computer profiles to provision a bare-metal computer as a Windows Scale-Out File Server cluster.
The physical computer profile is used to specify host settings during a bare-metal deployment. It contains configuration settings such as the location of the operating system image to use during host deployment, together with hardware and operating system configuration settings (management NIC, IP configuration, boot disk settings, drivers, domain, admin password, product key, time zone and scripts).
The physical computer profile cannot, however, be used to check and update compliance of hosts after the initial deployment, so it has less functionality than e.g. VMware's Host Profiles.
In VMM 2016, you can deploy a Storage Spaces Direct cluster from VMM, in either a disaggregated or a hyperconverged configuration, onto physical hosts.
|
|
|
Yes
RHEV includes quota support and service level agreement (SLA) capabilities for storage I/O bandwidth, network interfaces and CPU QoS/shares:
- Quota provides a way for the Administrator to limit the resource usage in the System. Quota provides the administrator a logic mechanism for managing resources allocation for users and groups in the Data Center. This mechanism allows the administrator to manage, share and monitor the resources in the Data Center from the engine core point of view.
- vNIC profile allows the user to limit the inbound and outbound network traffic in virtual NIC level.
- CPU profile limits the CPU usage of a virtual machine.
- Disk profile limits the bandwidth usage in order to allocate bandwidth more effectively on limited connections.
- CPU shares is a user-defined number that represents a relative metric for allocating CPU capacity. It defines how often a virtual machine will get a time slice of a CPU when there is no CPU idle time.
- Host network QoS can define limits on network usage on the physical NIC.
|
No
There is no ability to associate storage profiles with tiers of storage resources in XenServer (e.g. to facilitate automated compliance storage tiering).
|
Yes (Storage Classifications - incl. file shares)
(No major update in VMM 2019) VMM 2012 introduced the concept of Storage Classification. It is a simple mechanism to classify the capabilities of the storage managed by VMM (requires compliant SMI-S storage). Classifying storage entails assigning a meaningful classification to storage pools. For example, you may assign a classification of GOLD to a storage pool that resides on the fastest, most redundant storage array.
In VMM System Center 2012 you could create classifications for classic block storage which was managed by VMM. In VMM 2012 R2 you can also create classifications for file shares.
There is currently no automated discovery of the storage capabilities (like with VMware VASA); it is rather equivalent to the user-defined storage profiles used with vSphere.
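A minimal sketch (VMM PowerShell module; server and classification names are examples) of defining such a classification:
# Connect to VMM, create a user-defined storage classification and review the result
Import-Module VirtualMachineManager
Get-SCVMMServer -ComputerName "vmm01" | Out-Null
New-SCStorageClassification -Name "GOLD" -Description "Fastest, most redundant array"
Get-SCStorageClassification | Select-Object Name, Description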
|
|
|
|
Other |
|
|
|
V2V, P2V
NEW
Whilst there is no integrated capability to perform physical-to-virtual migrations in RHEV itself, Red Hat provides P2V tools to customers to export existing physical machines to a virtual infrastructure whilst ensuring that relevant changes, like paravirtualization drivers, are made to the new guest.
RHEV also provides the ability to use the virt-v2v tool via the manager to migrate workloads from VMware vSphere in a simple, wizard-based flow.
RHEV provides the virt-v2v CLI tool as well, enabling you to convert and import virtual machines created on other systems such as Xen, KVM and VMware ESX.
|
No
A resource pool in XenServer is hierarchically the equivalent of a vSphere or Hyper-V cluster. There is no functionality to sub-divide resources within a pool.
|
Yes (Host Groups)
(no major updates with VMM 2019)
VMM 2012 introduced the concept of Host Groups, a hierarchical structure that allows hosts to be grouped by their properties. Several settings and resources are assigned at the host group level, such as custom placement rules, host reserve settings for placement, dynamic optimization and power optimization settings, network resource inheritance, host group storage allocation, and custom properties. By default, child host groups inherit the settings from the parent host group.
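For illustration only (VMM cmdlets; server and group names are examples, and parameter details may differ between VMM releases), host groups can be created and nested from PowerShell:
# Create a parent host group and a nested child group that inherits its settings
Import-Module VirtualMachineManager
Get-SCVMMServer -ComputerName "vmm01" | Out-Null
$parent = New-SCVMHostGroup -Name "Production"
New-SCVMHostGroup -Name "Cluster-A" -ParentHostGroup $parent
Get-SCVMHostGroup | Select-Object Name, ParentHostGroup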
|
|
|
User Portal
RHEV's web-based Power User Portal is positioned by Red Hat as an entry-level Infrastructure-as-a-Service (IaaS) user portal.
It allows the user to: create, edit and remove virtual machines, manage virtual disks and network interfaces, assign user permissions to virtual machines, create and use templates to rapidly deploy virtual machines, monitor resource usage and high-severity events, create and use snapshots to restore virtual machines to a previous state. In conjunction with the quota functionality in RHEV administrators can restrict resources consumed by the users (but there is no integrated request approval or granular resource assignment based on e.g. subsets of the resources through private clouds).
|
No (XenConvert: retired), V2V: yes
XenConvert was retired in XenServer 6.2
XenConvert allowed conversion of a single physical machine to a virtual machine. The ability to do this conversion is included in the Provisioning Services (PVS) product shipped as part of XenDesktop. Alternative products support the transition of large environments and are available from PlateSpin.
Note: The XenServer Conversion Manager, for converting virtual machines (V2V), remains fully supported.
Background:
XenConvert supported the following sources: physical (online) Windows systems, OVF, VHD, VMDK or XVA; and the following targets: XenServer (directly), VHD, OVF or PVS vDisk.
|
No (use down-level VMM versions)
The Virtual Machine Manager feature that allowed you to convert existing physical computers into virtual machines through the physical-to-virtual (P2V) conversion process is no longer supported in System Center 2012 R2 Virtual Machine Manager and later versions.
Microsoft states that it is aware of the need for P2V capabilities and essentially recommends using a down-level version of VMM if this functionality is needed; for details see: http://bit.ly/17S2C4n
V2V conversion is also deprecated in VMM. The long-term solution recommended by Microsoft is the use of Azure Site Recovery.
|
|
Self Service Portal
Details
|
No (native);
Yes (with Vendor Add-On: CloudForms)
There is no orchestration tool/engine provided with RHEV itself.
This functionality is achieved with Red Hat CloudForms (fee-based vendor add-on), now part of RHCI. With CloudForms, resources are automatically and optimally used via policy-based workload and resource orchestration, ensuring service availability and performance. You can simulate allocation of resources for what-if planning, and continuous insight into granular workload and consumption levels allows chargeback, showback, and proactive planning and policy creation. For details: http://red.ht/1h7DR9T.
|
No (Web Self Service: retired in XS6.2)
Web Self Service was a lightweight portal which allowed individuals to operate their own virtual machines without having administrator credentials to the XenServer host. For large infrastructures, Citrix CloudPlatform is a full orchestration product with far greater capability; for a lightweight alternative, xvpsource.org offers a free open source product.
Related Citrix products have browser-based access; examples are StoreFront (the next generation of Web Interface) http://bit.ly/14IKAXq, and Citrix's (separate) CloudPortal product line that contains CloudPortal Service Manager, a web portal for onboarding, provisioning and customer self-service http://bit.ly/1C3i262
|
Yes (VMM console, Service Manager, Azure Pack, Azure Stack)
(No major change in WS2019) VMM 2012 introduced comprehensive private cloud capabilities (maintained and expanded with R2). Through VMM, an organization can create private clouds and manage access to the private cloud and the underlying physical resources. App Controller took on the role of the self-service portal (for Virtual Machine Manager) in a private cloud scenario (and can also extend your view into public cloud resources, e.g. Azure, if needed). App Controller is a lightweight web-based interface with a web front end and a back-end database component.
App Controller has been removed from the System Center suite since System Center 2016.
Please note: The VMM Self-Service Portal is no longer supported in System Center 2012 SP1. Instead, MS recommends that you use App Controller (included in System Center) as the self-service portal solution.
Self-service users can also use the VMM console instead of the VMM Self-Service Portal to perform tasks, such as deploying virtual machines and services (but beware of users using admin portals), for details on the usage see http://bit.ly/17S4BWv
If you are using Service Manager, then your organization can create service catalogs and offer those to self-service users (allowing you to surface pre-defined offerings/services including approval and other processes). While Service Manager based offerings can be any service, they can also include e.g. requests for private cloud infrastructure (like creating a new VM or a subscription to a cloud resource). Please note that Service Manager would typically not allow you to manage the actual cloud infrastructure itself. For an example of Service Manager integration see this video: http://bit.ly/1gT26fY
There is also a specific System Center Cloud Services Process Pack - updated to support System Center 2012 SP1. It's a service solution for automating the deployment of IaaS components. Once implemented, the CSPP offers a self-service experience using the Service Manager self-service portal to facilitate private cloud capacity requests, including the flexibility to request additional capacity as demands increase. See details/download here: http://bit.ly/102Du6w
If you are a Service Provider offering WS 2012 based cloud resources to tenants (running Enterprise workloads) then you could also consider the Azure Pack released in October 2013 (free to MS customers). The Windows Azure Pack is a collection of Windows Azure technologies that integrate with System Center and Windows Server to help provide a self-service portal for managing services such as websites, Virtual Machines, and Service Bus; a portal for administrators to manage resource clouds, scalable web hosting and more.
Essentially the Windows Azure Pack delivers the capabilities of Windows Azure into your datacenter, enabling you to offer a self-service, multi-tenant cloud with Windows Azure-consistent experiences and services. http://download.microsoft.com/download/0/1/C/01C728DF-B1DD-4A9E-AC5A-2C565AA37730/Windows_Azure_Pack_White_Paper.pdf
You can see from the above that one of the challenges Microsoft faces is the integration of multiple tools and interfaces to provide a consistent, simplified admin and user experience.
In the coming year, Microsoft will release Azure Stack, which brings Microsoft Azure into your datacenter. This solution will be sold as an integrated system (CPS format, starting from at least 4 servers).
|
|
Orchestration / Workflows
Details
|
sVirt, SELinux, iptables, VLANs, Port Mirroring
The RHEV hypervisor has various security features enabled. Security-Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. SELinux and sVirt add security policy in the kernel for effective intrusion detection, isolation and containment. (SELinux is essentially a set of patches to the Linux kernel plus some utilities that incorporate a strong, flexible mandatory access control architecture into the major subsystems of the kernel. For example, with SELinux you can give each qemu process a different SELinux label to prevent a compromised qemu from attacking other processes, and you can also label the set of resources that each process can see, so that a compromised qemu can only attack its own disk images.)
Advanced network security features like VLAN tagging and port mirroring are part of RHEV, but there are no additional security-specific add-ons included with RHEV (e.g. to address advanced fire-walling, edge security capabilities or Anti-Virus APIs).
|
Yes (Workflow Studio)
Workflow Studio provides a graphical interface for workflow composition in order to reduce scripting. Workflow Studio allows administrators to tie technology components together via workflows. Workflow Studio is built on top of Windows PowerShell and Windows Workflow Foundation. It natively supports Citrix products including XenApp, XenDesktop, XenServer and NetScaler.
Available as a component of the XenDesktop suite, Workflow Studio was retired in XenDesktop 7.x.
|
Yes (SC Orchestrator, SC Service Mgr, Azure Automation)
(No major update in 2019) System Center 2012 / R2 includes the Orchestrator product (from the Opalis acquisition). Orchestrator provides a workflow management solution that lets you automate the creation, monitoring, and deployment of resources in your environment.
Essentially Orchestrator provides the glue between the various System Center and other infrastructure components and allows automated interaction between the components through workflow automation (runbooks). This enables you to automate any process in your environment through a drag-and-drop interface by linking activities into runbooks.
Orchestrator includes many built-in standard activities, and you can expand Orchestrator's functionality and ability to integrate with other Microsoft and third-party products by installing integration packs.
Integration packs for Orchestrator contain additional activities that extend the functionality of Orchestrator. Details here: http://bit.ly/Un9Quf
Updated in Orchestrator in SC 2012 R2 are the Windows Azure Integration Pack for Orchestrator in System Center 2012 SP1 and System Center 2012 R2, and the System Center Integration Pack for System Center 2012 Virtual Machine Manager.
Orchestrator also provides extensible integration to any system through the Orchestrator Integration Toolkit. You can create custom integrations that allow Orchestrator to connect to any environment.
System Center 2012 / R2 Service Manager provides an integrated platform for automating your organization's IT service management best practices, such as those found in the Microsoft Operations Framework (MOF) and the Information Technology Infrastructure Library (ITIL). It provides built-in processes for incident and problem resolution, change control, and asset lifecycle management.
You will often find direct integration between e.g. Service Manager and Orchestrator, where a service request is handled / approved by a Service Manager workflow and subsequently triggers an Orchestrator runbook. For example, a user could request a virtual machine or service (service template) from the self-service portal in Service Manager, integrated with an Orchestrator runbook that triggers the virtual machine deployment through Virtual Machine Manager.
With Windows Server 2016, Microsoft introduced PowerShell 5, which provides a wide range of cmdlets to manage almost all possible configuration and management tasks in the operating system. Essentially all Windows roles and features can be managed using PowerShell.
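A small, hedged sketch of what that looks like in practice with the built-in ServerManager and Hyper-V modules (nothing here is specific to this matrix):
# Enable the Hyper-V role, then inventory running VMs and their resource assignment
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
Get-Command -Module Hyper-V | Measure-Object    # roughly two hundred cmdlets in recent releases
Get-VM | Where-Object State -eq 'Running' | Select-Object Name, ProcessorCount, MemoryAssigned, Uptime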
|
|
|
Agent-based (RHEL), CIM, SNMP
It is possible to use OEM vendor supplied tools, e.g. hardware monitoring utilities / agents, provided that the RHEL-based hypervisor is used.
RHEV-H does not provide the tools and libraries that various tools depend upon. While it will not offer customizations delivered by dedicated OEM CIM providers, CIM management is available in RHEV-H (RHEV-H cannot be customized today to include third party CIM support). Red Hat uses the open source libcmpiutil as the CIM provider in RHEV-H.
RHEV-M integrates with 3rd party plug-ins that can provide systems management, for example the HP OneView for Red Hat Enterprise Virtualization (OVRHEV) UI plug-in, which allows you to seamlessly manage your HP ProLiant infrastructure from within RHEV Manager and provides actionable, valuable insight into the underlying HP hardware (an HP Insight Control plug-in is also available).
|
Basic (NetScaler - Fee-Based Add-On)
XenServer uses netfilter/iptables firewalling. There are no specific frameworks or APIs for antivirus or firewall integration.
The fee-based NetScaler provides various (network) security related capabilities, e.g.:
- NetScaler Gateway: secure application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile
- NetScaler AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the NetScaler application delivery solution and is included with Citrix NetScaler, Platinum Edition.
Details here: http://bit.ly/17ttmKk
|
Windows Security, Hyper-V Extensible Switch (DNSSEC, PVLANs, port ACLs, BitLocker etc.), Shielded VMs
(No major change in WS2019) Server 2012 (maintained and enhanced in R2 - see http://bit.ly/19L1mmW) introduced a vast number of general security improvements that help secure your virtual or cloud environment, including Secure Boot (prevents boot code updates without appropriate digital certificates and signatures), early anti-malware launch, enhanced DNS security (DNSSEC), AppLocker, encrypted cluster volumes (BitLocker) etc.
The biggest virtualization / cloud related improvement has been provided through the new security and isolation capabilities through the Hyper-V Extensible Switch.
Windows Server 2012 / R2 provides the isolation and security capabilities for multi-tenancy by offering the following new features:
- Multitenant virtual machine isolation through private virtual LANs (PVLANs).
- Protection from Address Resolution Protocol/Neighbour Discovery (ARP/ND) poisoning (also called spoofing).
- Protection against DHCP snooping and DHCP guard (DHCP Guard: drops DHCP server messages from unauthorized virtual machines pretending to be DHCP servers, Router Guard: drops Router Advertisement and Redirection messages from unauthorized virtual machines pretending to be routers)
- Isolation and metering using virtual port access control lists (ACLs).
- The ability to trunk traditional VLANs to virtual machines.
In addition to the above, the WS2012 and WS2012 R2 enhancements in the area of network virtualization can arguably improve network security related aspects (see Network Virtualization).
For a list of most Security and Protection related enhancements in WS 2012 and WS 2012 R2 see: http://bit.ly/19L1mmW
In Windows Server 2016, Microsoft introduced Shielded VMs, which are VMs protected by the Host Guardian Service. Protected VMs have their virtual disks encrypted. This system can leverage a TPM 2.0 chip or certificates. (https://blogs.technet.microsoft.com/windowsserver/2016/05/10/a-closer-look-at-shielded-vms-in-windows-server-2016/)
In Windows Server 2016, a virtual TPM can also be added to a VM.
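To make the switch-level protections and the new virtual TPM concrete, here is a hedged sketch using the built-in Hyper-V module (the VM name and subnet are examples; Add-VMNetworkAdapterAcl is the WS2012-era port ACL cmdlet):
# Harden a tenant VM's virtual NIC: block rogue DHCP/router advertisements and add a simple port ACL
Set-VMNetworkAdapter -VMName "Tenant01" -DhcpGuard On -RouterGuard On
Add-VMNetworkAdapterAcl -VMName "Tenant01" -RemoteIPAddress "10.10.0.0/16" -Direction Both -Action Deny
# Windows Server 2016: attach a virtual TPM (a key protector must exist first)
Set-VMKeyProtector -VMName "Tenant01" -NewLocalKeyProtector
Enable-VMTPM -VMName "Tenant01"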
|
|
Systems Management
Details
|
RHEV-H or RHEL with KVM - details here
With RHEV 3.6, virtualization hosts must run version 7.2 or later of either the full Red Hat Enterprise Linux Hypervisor (RHEL-H) with KVM enabled or the Red Hat Enterprise Virtualization Hypervisor (RHEV-H), an image-based, purpose-built hypervisor with a minimized security footprint. RHEV supports both x86 and Power deployments from a single x86-based manager.
|
Yes (API / SDKs, CIM)
XenServer includes a XML-RPC based API, providing programmatic access to the extensive set of XenServer management features and tools. The XenServer API can be called from a remote system as well as local to the XenServer host. Remote calls are generally made securely over HTTPS, using port 443.
XenServer SDK: There are five SDKs available, one for each of C, C#, Java, PowerShell, and Python. For XenServer 6.0.2 and earlier, these were provided under an open-source license (LGPL or GPL with the common linking exception), which allows use (unmodified) in both closed- and open-source applications. From XenServer 6.1 onwards the bindings are in the majority provided under a BSD license that allows modifications.
Citrix Project Kensho provided a Common Information Model (CIM) interface to the XenServer API and introduced a Web Services Management (WSMAN) interface to XenServer. Management agents can be installed and run in the Dom0 guest.
Details here: http://bit.ly/12nQl9f
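As a hedged illustration (assuming the XenServer PowerShell module from the SDK is installed; host address and credentials are placeholders), the same API can be driven from PowerShell:
# Connect to a XenServer host over the XML-RPC/HTTPS API and list non-template VMs
Import-Module XenServerPSModule
Connect-XenServer -Url "https://xenserver01" -UserName "root" -Password "password"
Get-XenVM | Where-Object { -not $_.is_a_template -and -not $_.is_control_domain } | Select-Object name_label, power_state
Disconnect-XenServer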
|
Yes (WMI + PowerShell)
(No major change in WS2019) Hyper-V / Hyper-V Server environments can utilize the comprehensive native Windows Server OS hardware instrumentation based on WMI, Microsoft's implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the DMTF. PowerShell is typically used as the automation framework on top of this instrumentation.
Windows Server 2012 and WS 2012 R2 include Windows PowerShell cmdlets for Network Virtualization that let you build command-line tools or automated scripts for configuring, monitoring, and troubleshooting network isolation policies.
Windows Server 2016 brings PowerShell v5 with a greatly expanded set of cmdlets across the operating system.
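A minimal sketch of querying the Hyper-V WMI v2 namespace directly with the CIM cmdlets (no environment-specific names assumed):
# List virtual machines via the root\virtualization\v2 provider rather than the Hyper-V module
Get-CimInstance -Namespace "root\virtualization\v2" -ClassName Msvm_ComputerSystem |
    Where-Object Caption -eq "Virtual Machine" |
    Select-Object ElementName, EnabledState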
|
|
|
Network and Storage
|
|
|
|
|
|
|
Storage |
|
|
Supported Storage
Details
|
Yes (FC, iSCSI)
Multipathing in RHEV manager provides:
1) Redundancy (provides failover protection).
2) Improved performance: I/O operations are spread over the paths, by default in a round-robin fashion, with other methods supported including Asymmetric Logical Unit Access (ALUA). This applies to block devices (FC, iSCSI), although equivalent functionality can be achieved with a sufficiently robust network setup for network-attached storage.
|
Yes (limited for SAS)
Dynamic multipathing support is available for Fibre Channel and iSCSI storage arrays (round robin is the default balancing mode). XenServer also supports the LSI Multi-Path Proxy Driver (MPP) for the Redundant Disk Array Controller (RDAC) - by default this driver is disabled. Multipathing to SAS-based SANs is not enabled by default; changes must typically be made to XenServer, as SAS drives do not readily appear as emulated SCSI LUNs.
|
SMB3, virtual FC, SAS, SATA, iSCSI, FC, FCoE; shared vhdx, S2D
(No major change in WS2019) The R2 release of Server 2012 added several storage related features:
- Support for shared virtual hard disks, enabling clustering of virtual machines by using shared virtual hard disk (VHDX) files (can be hosted on CSVs or SMB-based Scale-Out File Server file shares).
- Hyper-V Live Migration over SMB - enables you to perform a live migration of virtual machines by using SMB 3.0 as a transport (SMB Direct and SMB Multichannel for high speed migration with low CPU utilization)
- Improved SMB bandwidth management (configure SMB bandwidth limits to control different SMB traffic types: default, live migration, and virtual machines)
Server 2012 introduced a large number of storage related enhancements.
Server 2012 / R2 virtualization hosts support (almost) all commonly available storage types as attachments for virtual machines. The biggest improvement has been the addition of network attached storage allowing you to configure (cheaper/simpler) network-based storage for virtual machines using the SMB3 protocol. New file server features include transparent failover, networking improvements for better bandwidth and resiliency, support for network adapters with RDMA capability, specific performance optimizations, Volume Shadow Copy Service support and support for Windows PowerShell commands.
Another enhancement with WS 2012 was the addition of virtual fibre channel for the virtual machines. Virtual Fibre Channel lets virtual machines connect directly to Fibre Channel-based storage and presents virtual HBA ports in the guest operating system. This provides: Unmediated access to a SAN, Hardware-based I/O path to the Windows software virtual hard disk stack, Live migration, N_Port ID Virtualization (NPIV), Single Hyper-V host connected to different SANs with multiple Fibre Channel ports, Up to four virtual Fibre Channel adapters on a virtual machine, Multipath I/O (MPIO)
Details on Storage for Hyper-V here: http://bit.ly/TDeOEC
Hyper-V in Windows Server 2012 also introduced an update to the VHD format, called VHDX, that has much larger capacity and built-in resiliency.
Windows Server 2016 brings Storage Spaces Direct, which enables you to implement a hyperconverged or disaggregated solution. Storage Spaces Direct enables the use of direct-attached storage devices inside servers to create a highly available storage solution. For more information, you can read this whitepaper: https://gallery.technet.microsoft.com/Understand-Hyper-Converged-bae286dd
Storage Spaces Direct requires Windows Server 2016 Datacenter edition. This feature is not available in Windows Server 2016 Standard.
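A hedged sketch (failover clustering and storage cmdlets; cluster and volume names are examples) of enabling Storage Spaces Direct on an existing cluster and carving out a CSV-backed volume:
# Turn on S2D across the cluster's local disks, then create a resilient ReFS volume exposed as a CSV
Enable-ClusterStorageSpacesDirect -CimSession "S2DCluster01"
New-Volume -CimSession "S2DCluster01" -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 2TB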
|
|
|
Yes
For shared file systems RHEV supports LVM for block storage and POSIX, NFS or GlusterFS for file storage.
|
Yes (SR)
XenServer uses the concept of Storage Repositories (disk containers/data stores). These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. All hosts in a single resource pool must have at least one shared SR in common. NAS, iSCSI (Software initiator and HBA are both supported), SAS, or FC are supported for shared storage.
|
Yes (MPIO and SMB Multichannel)
NEW
(No major change in WS2019) Block-based MPIO can be used with Fibre Channel, iSCSI and SAS interfaces in Windows Server 2008 and Windows Server 2012 / WS2012 R2.
An MPIO solution can be deployed:
- By using a DSM (Device Specific Module aka plugin) provided by a 3rd party storage array manufacturer in a Fibre Channel, iSCSI, or SAS shared storage configuration.
- By using the Microsoft DSM, which is a generic DSM provided for Windows Server 2012 in a Fibre Channel, iSCSI, or SAS shared storage configuration.
Details on MPIO here: http://bit.ly/TDa7ut
For SMB-based storage, Server 2012 introduced the SMB Multichannel functionality, which provides scalable and resilient connections to SMB shares by dynamically creating multiple connections for single sessions, or multiple sessions on single connections, depending on connection capabilities and demand (primarily for resiliency and load balancing).
In Windows Server 2016, Microsoft has improved the SMB Multichannel configuration. Now each (v)NIC can be added to the same network and left without configuration; the cluster will manage the configuration automatically. This is Simplified SMB Multichannel (https://technet.microsoft.com/en-in/windows-server-docs/failover-clustering/smb-multichannel).
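For illustration (built-in MPIO and SMB cmdlets; no environment-specific names), typical configuration and verification steps might look like this:
# Install MPIO, let the Microsoft DSM claim iSCSI paths, then verify SMB Multichannel is in use
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Get-MPIOAvailableHW
Get-SmbMultichannelConnection | Select-Object ServerName, ClientInterfaceIndex, ServerInterfaceIndex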
|
|
Shared File System
Details
|
Yes
Booting from SAN is possible.
|
Yes (iSCSI, FC)
Yes for XenServer 6.1 and later (XenServer 5.6 SP1 added support for boot from SAN with multi-pathing support for Fibre Channel and iSCSI HBAs)
Note: Rolling Pool Upgrade should not be used with Boot from SAN environments. For more information on upgrading boot from SAN environments see Appendix B of the XenServer 6.5.0 Installation Guide: http://bit.ly/1E33XZs
|
Yes (CSV); enhanced compatibility and resilience
(No major update in WS2019) Cluster Shared Volumes (CSVs) provide a general-purpose, clustered file system in Windows Server 2012 / R2, layered above NTFS, that allows multiple hosts access to the same data. They are not restricted to specific clustered workloads (in Windows Server 2008 R2, CSVs only supported the Hyper-V workload).
New CSV related features with WS 2012 R2 are:
- CSV ownership is now balanced across the cluster nodes, to avoid one node owning a disproportionate number of CSVs
- Increased resiliency - multiple Server service instances per failover cluster node (a default instance that handles incoming traffic from SMB clients that access regular file shares, and a second CSV instance that handles only inter-node CSV traffic)
- CSV Cache allocation: In Windows Server 2012 you could allocate only 20% of the total physical RAM to the CSV cache - you can now allocate up to 80%.
- New support for: Resilient File System (ReFS), Deduplication (e.g. for VDI), Parity storage spaces, Tiered storage spaces and Storage Spaces write-back caching
Server 2012 introduced CSV version 2 that features significantly better integration with Hyper-V and had the following enhancements:
- CSV is now a core Failover Clustering feature (no longer a separate component that needs to be enabled)
- Has been enhanced in scalability and performance to support 64 nodes
- Single Name Space (files have the same name and path when viewed from any node in the cluster)
- Extends its benefits beyond Hyper-V to support other application workloads - application storage can be served from the same share as data
- Supports Storage Spaces, SMB Direct and SMB Multipathing, and enables more efficient storage with thin provisioning
- Enabling CSV on a disk is now a single right click with the mouse
- CSV disks are now included in the Failover Cluster Manager Storage view, easing management.
- CSV supports BitLocker encryption
- Support for the full feature set of VSS and support for both Hardware and Software Backup of CSV volumes.
Note: You cannot use a disk for a CSV that is formatted with FAT or FAT32.
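As a short sketch (failover clustering module; the disk name is an example), adding a clustered disk to CSV is a one-liner in PowerShell as well:
# Convert an available clustered disk into a Cluster Shared Volume and show which node owns it
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode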
|
|
|
Yes
The hypervisor can be installed onto USB storage devices or solid state disks. (The initial boot/install USB device must be a separate device from the installation target).
|
No
While there are several (unofficial) approaches documented, officially no flash drives are supported as boot media for XenServer 6.x.
|
Yes (iSCSI, diskless, FC)
(no major updates with WS2019)
Booting from SAN is supported for iSCSI and FC
In Server 2012 the iSCSI Software Target capabilities were enhanced to support diskless network boot capabilities (you can use a single operating system image to start many diskless computers).
Comment: The new virtual Fibre Channel function is not intended to be a host boot option, but rather a means to provide FC attachment directly to virtual machines.
|
|
|
RAW, Qcow2
RHEV supports two storage formats: RAW and QCOW2.
In an NFS data center the Storage Pool manager (SPM) creates the virtual disk on top of a regular file system as a normal disk in preallocated (RAW) format. Where sparse allocation is chosen additional layers on the disk will be created in thinly provisioned Qcow2 (sparse) format.
For iSCSI and SAN (block), the SPM creates a Volume group (VG) on top of the Logical Unit Numbers (LUNs) provided. During the virtual disk creation, either a preallocated format (RAW) or a thinly provisioned Qcow2 (sparse) format is created.
Background:
QCOW (QEMU copy-on-write) decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. This enables advanced features like snapshots. Creating a new snapshot creates a new copy-on-write layer, either a new file or logical volume, with an initial mapping that points all logical blocks to the offsets in the backing file or volume. When writing to a QCOW2 volume, the relevant block is read from the backing volume, modified with the new information and written into the new snapshot QCOW2 volume. Then the map is updated to point to the new location.
Benefits QCOW2 offers over using RAW representation include:
- Copy-on-write support (volume only represents changes to a disk image).
- Snapshot support (a volume can represent multiple snapshots of the image's history).
The RAW storage format has a performance advantage over QCOW2, as no formatting is applied to images stored in the RAW format (reading and writing images stored in RAW requires no additional mapping or reformatting work on the host or manager). When the guest file system writes to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume. Note: the RAW format requires that the entire space of the defined image be preallocated (unless using externally managed, thin-provisioned LUNs from a storage array).
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (Qcow2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-IO intensive virtual machines.
|
vhd, raw disk (LUN)
XenServer supports file based vhd (NAS, local), block device-based vhd (FC, SAS, iSCSI) using a Logical Volume Manager on the Storage Repository or a full LUN (raw disk)
|
No
(no major updates with WS2019)
Only Hyper-V Server (the free version) is documented to support boot from USB/flash storage. Microsoft stated already with 2008 R2: Although Microsoft Hyper-V Server 2008 R2 is built with components of Windows Server 2008 R2, some changes have been made specifically to Microsoft Hyper-V Server 2008 R2 in order to support boot from USB.
The documentation for setting up Hyper-V Server 2012 for boot from USB are here: http://technet.microsoft.com/en-us/library/jj733589.aspx
Nano Server doesn't support USB boot; only SD card boot is supported.
|
|
Virtual Disk Format
Details
|
Default max virtual disk size is 8TB (but it is configurable in the RHEV DB)
The default maximum supported virtual disk size is 8TB in RHEV (but it is configurable in the RHEV DB).
With virtio-scsi support, Red Hat also supports now 16384 logical units per target.
File level disk size remains unlimited by the hypervisor, the limits of the underlying filesystem do however apply.
|
2TB
For XenServer 6.5 the maximum virtual disk sizes are:
- NFS: 2TB minus 4GB
- LVM (block): 2TB minus 4 GB
Reference: http://bit.ly/17eNuo7
|
.vhdx (incl. sharing and online resizing), vhd, pass-through (raw)
Hyper-V in Server 2012 introduced an updated virtual disk format: VHDX. Pass-through (raw) disks and .vhd disks are also supported. You can convert a .vhd file to .vhdx using Hyper-V Manager or VMM.
Server 2012 R2 added support for virtual hard disk resizing while the vm is running (only available for VHDX files that are attached to a SCSI controller). Additionally it enables clustering virtual machines by using shared virtual hard disk (VHDX) files.
In Windows Server 2016, Microsoft has improved the shared VHDX feature (now called VHD Set). You can now resize the disk online, move it and back it up from the host level.
Use vhdx whenever possible - the only likely reason to stay with vhd is if you expect to move the VM to an (older) host running a Hyper-V version that does not support vhdx.
The main VHDX features are:
- Support for virtual storage of up to 64 TB
- Protection against data corruption (logging updates to VHDX metadata)
- Improved alignment of the virtual hard disk format to work well on large sector physical disks.
Other:
- Larger block sizes for dynamic and differential disks (allowing these disks to attune to the needs of the workload)
- 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
- Ability to store custom metadata about the file (e.g. OS version, or updates that have been applied)
- Trim support - resulting in smaller file sizes and allowing the underlying physical storage device to reclaim unused space. (Trim requires direct-attached storage or SCSI disks and trim-compatible hardware.)
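A minimal sketch (built-in Hyper-V module; paths and sizes are examples) of converting a legacy .vhd to .vhdx and then growing it:
# Convert an older .vhd to the .vhdx format, then expand it (online resize requires a SCSI-attached VHDX)
Convert-VHD -Path "D:\VMs\app01.vhd" -DestinationPath "D:\VMs\app01.vhdx"
Resize-VHD -Path "D:\VMs\app01.vhdx" -SizeBytes 200GB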
|
|
|
Yes
During the virtual disk creation, either a preallocated format (RAW) or a thinly provisioned Qcow2 (sparse) format can be specified.
A preallocated virtual disk has reserved storage of the same size as the virtual disk itself. The backing storage device (file/block device) is presented as is to the virtual machine with no additional layering in between. This results in better performance because no storage allocation is required during runtime. On SAN (iSCSI, FCP) this is achieved by creating a block device with the same size as the virtual disk. On NFS this is achieved by filling the backing hard disk image file with zeros. Pre-allocating storage on an NFS storage domain presumes that the backing storage is not Qcow2 formatted and zeroes will not be deduplicated in the hard disk image file. (If these assumptions are incorrect, do not select Preallocated for NFS virtual disks).
For sparse virtual disks, backing storage is not reserved and is allocated as needed during runtime. This allows for storage over-commitment under the assumption that most disks are not fully utilized and storage capacity can be utilized better. This requires the backing storage to monitor write requests and can cause some performance issues. On NFS, backing storage is achieved simply by using files. On SAN, this is achieved by creating a block device smaller than the virtual disk's defined size and communicating with the hypervisor to monitor necessary allocations. This does not require support from the underlying storage devices.
|
Yes (Limitations on block)
XenServer supports 3 different types of storage repositories (SR)
1) File-based vhd on a local ext3 or remote NFS filesystem, which supports thin provisioning of the vhd
2) Block device-based vhd format (SAN based on FC, iSCSI, SAS), which has no support for thin provisioning of the virtual disk but supports thin provisioning for snapshots
3) LUN-based raw format - a full LUN is mapped as a virtual disk image (VDI), so this is only applicable if the storage array HW supports that functionality.
|
64TB (vhdx), 2TB (vhd), 256TB+ (raw)
(no major updates with WS2019)
With Hyper-V / free Hyper-V Server 2012 / R2 the new .vhdx virtual disk format supports disk sizes up to 64 TB. Pass-through disks can exceed 256 TB in size. The .vhd format is still limited to 2 TB.
|
|
Thin Disk Provisioning
Details
|
No
This feature is on the roadmap
|
No
There is no NPIV support for XenServer
|
Yes (Dynamic Disks, Trim)
(no major updates with WS2019)
Server 2012 has been enhanced to use better just-in-time allocation of storage, known as thin provisioning, and the ability to reclaim storage that is no longer needed, known as trim. Hyper-V has supported thinly provisioned disks (Dynamic Disks) for some time; MS now typically recommends fixed disks when using the .vhd format (best performance, resiliency) and dynamic disks when using the .vhdx format.
Windows Server 2012 identifies and monitors thinly provisioned disks and also provides a new Trim API that allows the underlying physical storage device to reclaim unused space (supported with the new .vhdx format). NTFS issues trim notifications when appropriate in real time. In addition, trim notifications are issued as part of storage consolidation and optimization, which is performed regularly on a scheduled basis. (Trim requires direct-attached storage or SCSI disks and trim-compatible hardware.)
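As a minimal sketch of the two provisioning models (hypothetical paths; New-VHD is the standard Hyper-V cmdlet):
# Thinly provisioned (dynamic) disk - grows as data is written
New-VHD -Path "D:\VMs\data01.vhdx" -SizeBytes 100GB -Dynamic
# Fixed (fully allocated) disk - all space reserved at creation time
New-VHD -Path "D:\VMs\db01.vhdx" -SizeBytes 100GB -Fixed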
|
|
|
Yes
In RHEV, by default, virtual machines created from templates use thin provisioning. In the context of templates, thin provisioning of a vm means copy-on-write (aka a linked clone or difference disk), rather than a growing disk image that only takes up the storage space it actually uses (which is what thin provisioning usually refers to). All virtual machines based on a given template share the same base image as the template and must remain on the same data domain as the template.
You can however specify to deploy the vm from template as clone - which means that a full copy of the vm will be deployed. When selecting to clone you can then select thin (sparse) or pre-allocated provisioning of the full clone. Deploying from template as clone results in independence from the base image but space savings associated with using copy on write approaches are lost.
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (Qcow2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-IO intensive virtual machines.
|
Yes - Clone on boot (new), clone, PVS, MCS
XenServer 6.2 introduced Clone on Boot
This feature supports Machine Creation Services (MCS) which is shipped as part of XenDesktop. Clone on boot allows rapid deployment of hundreds of transient desktop images from a single source, with the images being automatically destroyed and their disk space freed on exit.
General cloning capabilities: When cloning VMs based off a single VHD template, each child VM forms a chain where new changes are written to the new VM, and old blocks are directly read from the parent template. When this is done with a file based vhd (NFS) then the clone is thin provisioned. Chains up to a depth of 30 are supported but beware performance implications.
Comment: Citrix's desktop virtualization solution (XenDesktop) provides two additional technologies that use image sharing approaches:
- Provisioning Services (PVS) provides a (network) streaming technology that allows images to be provisioned from a single shared-disk image. Details: http://bit.ly/1ICMeqv
- With Machine Creation Services (MCS) all desktops in a catalog will be created off a Master Image. When creating the catalog you select the Master and choose if you want to deploy a pooled or dedicated desktop catalog from this image.
Note that neither PVS (for virtual machines) nor MCS is included in the base XenServer license.
|
Yes (Virtual Fibre Channel)
(no major updates with WS2019)
Server 2012 has improved support for NPIV and enables you to connect Hyper-V / free Hyper-V Server virtual machines directly to your Fibre Channel storage arrays by providing virtual Fibre Channel ports within the guest OS (instead of accessing it as virtualized storage through the host). NPIV lets multiple N_Port IDs share a single physical N_Port. This lets multiple Fibre Channel initiators use a single physical port, easing hardware requirements in SAN design, especially where virtual SANs (i.e. segregation) are needed. Virtual Fibre Channel for Hyper-V guests uses NPIV to create multiple NPIV ports on top of the host's physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual HBA is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.
Features/Benefit:
- Direct (native) access to Fibre Channel Storage from within the virtual machines - allowing native FC management (QoS, performance etc.)
- Cluster guest OSs over native Fibre Channel
- Virtual SAN (if a host is attached to multiple physical SANs using different physical HBAs you can define virtual SANs by grouping physical FC ports on the host that are attached to the same SAN)
- Up to 4 virtual FC ports per vm
- Supports live migration and MPIO (DSM within the vm)
Note: VMM no longer supports management of virtual machines deployed directly on LUNs exposed to Hyper-V through N_Port ID Virtualization (NPIV). Workaround: Use virtual fibre channels that are configured for virtual machines in Hyper-V.
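A hedged PowerShell sketch of the virtual Fibre Channel setup described above (hypothetical SAN/VM names; assumes an NPIV-capable FC HBA in the host):
# Define a virtual SAN from the host's FC initiator port(s)
New-VMSan -Name "ProductionSAN" -HostBusAdapter (Get-InitiatorPort -ConnectionType FibreChannel)
# Add a virtual FC HBA (up to 4 per VM) connected to that virtual SAN
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "ProductionSAN"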
|
|
|
No (native);
Yes (with Vendor Add-On: Red Hat Gluster Storage)
There is no native software based replication included in the base RHEV product.
However, there is support for managing Red Hat Gluster Storage volumes and bricks using Red Hat Enterprise Virtualization Manager. Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible unstructured data storage for the enterprise.
Red Hat Storage Console (RHS-C) of Red Hat Storage Server (RHS) for On-Premise provides replication via the native capabilities of RHS, with integration in the RHEV-M interface.
RHS-C extends RHEV-M 3.x and oVirt Engine technology to manage Red Hat Trusted Storage Pools with management via the Web GUI, REST API and (future) remote command shell.
Note that Red Hat Gluster Storage is a fee-based add-on.
|
No
There is no integrated (software-based) storage replication capability available within XenServer
|
Yes (Differencing Disk)
(no major updates with WS2019)
Hyper-V/Hyper-V Server supports Differencing Disks with Server 2012 / R2 and 2008 R2. Differencing disks allow for the quick and space-saving creation of virtual machines by pointing to a shared parent disk and storing unique changes (writes) to the individual machines in a difference file that remains linked to the parent image. Reads will be served from the parent (unless the block has already been changed and writes accumulate in the difference file - increasing storage requirements over time).
With Server 2008 R2 Microsoft recommends, though, avoiding differencing disks on virtual machines that run server workloads in a production environment (as running out of disk space when using diff disks can cause virtual machines to pause unexpectedly), see http://bit.ly/UxL3DK
Also with Server 2012 / R2, difference disks are typically used for virtual machines with a shorter life cycle (i.e. lab environments or pooled VDI environments) due to the linked nature of the images (i.e. updating the parent breaks the link). For Server 2012 MS has not (yet?) updated the above article and only makes general performance recommendations in this document: http://bit.ly/UxHfT9
Quote: 'Having only a few snapshots can elevate the CPU usage of storage I/O, but might not noticeably affect performance except in highly I/O-intensive server workloads. However, having a large chain of snapshots can noticeably affect performance because reading from the VHD can require checking for the requested blocks in many differencing VHDs. Keeping snapshot chains short is important for maintaining good disk I/O performance.'
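A minimal sketch of creating a differencing disk from a parent image (hypothetical paths):
# Writes go to lab-vm01.vhdx; unchanged blocks are read from the parent master image
New-VHD -Path "D:\VMs\lab-vm01.vhdx" -ParentPath "D:\Images\master.vhdx" -Differencing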
|
|
SW Storage Replication
Details
|
FS-Cache
FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over the network and cache it on local disk.
This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS). Users can enable this feature via mount options for NFS/POSIX.
|
IntelliCache and memory read cache
XenServer 6.5 sees the introduction of a read caching feature that uses host memory in the new 64-bit Dom0 to reduce IOPS on storage networks and improve LoginVSI scores, with VMs booting up to 3x faster. The read cache feature is available for XenDesktop & XenApp Platinum users who have an entitlement to this feature.
IntelliCache is a XenServer feature that can (only!) be used in a XenDesktop deployment to cache temporary and non-persistent operating-system data on the local XenServer host. It is of particular benefit when many Virtual Machines (VMs) all share a common OS image. The load on the storage array is reduced and performance is enhanced. In addition, network traffic to and from shared storage is reduced as the local storage caches the master image from shared storage.
IntelliCache works by caching data from a VM's parent VDI in local storage on the VM host. This local cache is then populated as data is read from the parent VDI. When many VMs share a common parent VDI (for example by all being based on a particular master image), the data pulled into the cache by a read from one VM can be used by another VM. This means that further access to the master image on shared storage is not required.
Reference: http://bit.ly/14Ko14u
|
Yes (Hyper-V Replica - Storage Replica)
Server 2012 introduced a new software-based replication called Hyper-V Replica for Hyper-V and free Hyper-V Server environments - an asynchronous replication of virtual machines over a network link from one Hyper-V host at a primary site to another Hyper-V host at a replica site without the need of a hardware based replication mechanism.
In the event of failure at the primary site, administrators can manually fail over production virtual machines to the Hyper-V server at the recovery site.
During failover, virtual machines are brought back to a consistent point in time, and they can be accessed by the rest of the network. The minimum interval of replication is 5 minutes (so there is the usual data loss associated with asynchronous replication).
New in WS 2012 R2:
- You can also configure extended replication where the Replica server forwards information about changes that occur on the primary virtual machines to a third server for additional protection.
- The frequency of replication, which previously was a fixed value, is now configurable. You can also access recovery points for 24 hours. Previous versions had access to recovery points for only 15 hours.
Pros:
- Affordable DR solution (included in Server 2012, no extra licenses required)
- Does not require expensive SAN based replication
- designed to work with lower bandwidth / high latency connections - ideal for small and medium Enterprises
- Test failover without impact on production VMs
Cons:
- Manual failover and recovery actions
- Aimed at smaller scale deployments, typically not a suitable solution for replication of massive environments, added network traffic
- no integration with SAN array replication (like vSphere SRM), no synchronous replication
Windows Server 2016 brings Storage Replica as a storage replication solution, which enables block-level replication of one volume to another. The target volume can be located in a cluster or on a standalone server, and you can also replicate two volumes located in the same server. This feature requires Windows Server 2016 Datacenter. For further information you can read this topic: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/storage-replica-overview
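A hedged PowerShell sketch of a basic Hyper-V Replica setup (hypothetical host/VM names; Kerberos over HTTP port 80 assumed):
# On the replica host: accept incoming replication
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
# On the primary host: enable replication of a VM (5-minute interval) and start the initial copy
Enable-VMReplication -VMName "web01" -ReplicaServerName "hv-replica01" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "web01"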
|
|
|
Yes
This is possible via the local data center feature, but it is limited to a single host with reduced management features.
|
No
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
CSV Cache; Write-Back Cache - Storage Spaces Direct Cache
In WS 2012 R2 there are two main features that provide performance improvements through caching:
1. CSV Cache:
Windows Server 2012 introduced the CSV cache to provide (READ) caching on the block level by allocating system memory (RAM) as cache. This can significantly improve Hyper-V read performance (e.g. VDI boot storms).
New in WS 2012 R2:
- In Windows Server 2012, you could allocate only 20% of the total physical RAM to the CSV cache. You can now allocate up to 80%.
- In Windows Server 2012, the CSV cache was disabled by default. In Windows Server 2012 R2, it is enabled by default.
2. Storage Spaces Write- Back Caching:
Storage Spaces can use existing SSDs in the storage pool to create a write-back cache that buffers small random writes to SSDs before later writing them to HDDs.
- write-back cache is transparent to administrators and users and is created on all new virtual disks as long as there is a sufficient number of SSDs in the storage pool (Simple spaces: one SSD, Two-way mirror spaces and single-parity spaces: two SSDs, three-way mirror spaces and dual parity spaces: three SSDs)
- write-back cache works with all types of storage spaces, including storage spaces with storage tiers
- newly created storage spaces automatically use a 1 GB write-back cache (as long as the storage pool contains enough disks with MediaType set to SSD or Usage set to Journal)
3. Storage Spaces Direct (WS 2016) provides a highly available cache built from the fastest storage devices in each node; in production it is recommended to deploy at least two cache devices. Storage Spaces Direct requires Windows Server 2016 Datacenter edition.
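As a minimal sketch, the CSV cache size (in MB) is set cluster-wide via the cluster common property, e.g.:
# Allocate 1 GB of host RAM as CSV read cache (enabled by default in WS 2012 R2)
(Get-Cluster).BlockCacheSize = 1024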
|
|
|
Yes (Limited)
RHEV's REST API allows storage actions and storage provisioning calls, and a backup API can also be leveraged with array cloning & replication for DR. It doesn't have vendor-specific offloading abilities.
|
Integrated StorageLink (deprecated)
XenServer 6.5 sees the retirement of the Integrated StorageLink feature, which is in line with the deprecation notice given in XenServer 6.2 and detailed in the XenServer 6.5 release notes.
With XenServer 6.2 Citrix announced that Integrated StorageLink (iSL) is a deprecated feature (development stopped on it and removal in a future release).
Background: XenServer 6 introduced Integrated StorageLink capabilities. It replaces the StorageLink Gateway technology used in previous editions and removes the requirement to run a VM with the StorageLink components. It provides access to existing storage array-based features such as data replication, de-duplication, snapshot and cloning. Citrix StorageLink allows integration with existing storage systems, gives a common user interface across vendors and talks the language of the storage array, i.e. exposes the native feature set of the array. StorageLink also provides a set of open APIs that link XenServer and Hyper-V environments to third-party backup solutions and enterprise management frameworks. There is a limited HCL of StorageLink-supported arrays. Details: http://bit.ly/1xS2J0m
|
(no major updates with WS2019) Improved (Storage Spaces); Tiered Storage, Write-Back Cache - Storage Spaces Direct
Windows Server 2012 introduced the concept of Storage Spaces (supported for Hyper-V and free Hyper-V Server environments). In simplified terms, Storage Spaces allows you to use inexpensive SAS/SATA drives (JBODs) to provide more advanced pooled and redundant storage through a storage virtualization layer. First you create a storage pool specifying the physical disk drives to be used, and subsequently you create virtual disks (not to be confused with vhd files for virtual machines) within the pool. These disks then appear to the host like new physical disks with changed attributes (resilience, capacity etc.) gained by adding mirroring or parity, and pools can be dynamically expanded. These can be used for local storage, failover clustering and SMB Direct (RDMA).
WS 2012 R2 introduced significant enhancements to Storage Spaces:
1. Tiered Storage
Ability to provide tiered storage that automatically moves frequently accessed data to faster (SSD) storage and infrequently accessed data to slower (HDD) storage.
- To create a storage space with storage tiers, the storage pool must have a sufficient number of hard disks and SSDs to support the selected storage layout, and the disks must contain enough free space.
- When creating a storage space using the New Virtual Disk Wizard or the New-VirtualDisk cmdlet you can now specify to create the virtual disk with storage tiers.
- The virtual disk must use fixed provisioning, and the number of columns will be identical on both tiers (a four-column, two-way mirror with storage tiers would require eight SSDs and eight HDDs).
- Volumes created on virtual disks that use storage tiers should be the same size as the virtual disk.
- Administrators can pin (assign) files to the standard (HDD) or faster (SSD) tier by using the Set-FileStorageTier cmdlet, ensuring that the files are always accessed from the appropriate tier.
2. Storage Spaces Write- Back Caching:
Storage Spaces can use existing SSDs in the storage pool to create a write-back cache that buffers small random writes to SSDs before later writing them to HDDs.
- write-back cache is transparent to administrators and users and is created on all new virtual disks as long as there is a sufficient number of SSDs in the storage pool (Simple spaces: one SSD, Two-way mirror spaces and single-parity spaces: two SSDs, three-way mirror spaces and dual parity spaces: three SSDs)
- write-back cache works with all types of storage spaces, including storage spaces with storage tiers
- newly created storage spaces automatically use a 1 GB write-back cache (as long as the storage pool contains enough disks with MediaType set to SSD or Usage set to Journal)
3. Parity space support for failover clusters, dual parity and automatic rebuild from storage pool free space (rather than hot spares)
- You can now use parity spaces to maximize capacity and resiliency while still offering the ability to fail over to another cluster node
- Dual parity enables you to keep a high level of resiliency when using a parity space with a large number of disks or any time when you need to help protect against two simultaneous disk failures
- When a disk fails, instead of writing a copy of the data that was on the failed disk to a single hot spare, the data is copied to multiple drives in the pool such that the previous level of resiliency is achieved
Please note: Storage spaces can be provided from a single file server node using (non-raided!) SAS or SATA JBODs. For clustered nodes you will need SAS JBODs (not SATA) in a SHARED JBOD enclosure (Storage Spaces does NOT present virtualized local storage as shared storage like e.g. VMware VSA). MS will list certified JBOD enclosures.
Advantages:
- Provide reliable and scalable storage with reduced cost (cheap JBODs)
- Aggregate individual drives into storage pools
- Expand storage pools on demand
- Deploy specific drives as hot spares
Limitations:
- Fibre-channel and iSCSI are not supported
- SAS or SATA JBODs only, no HW Raid, for clustering SAS only and a shared SAS (no SATA) JBOD enclosure
- A mirrored pool must have at least 2 drives, three drive minimum required for using parity
- Virtual disks to be used with a failover cluster must use the NTFS file system (not ReFS), can't use thin provisioning and must use the mirrored (not parity) option.
- Performance considerations (software RAID, JBODs of different types, multiple virtual disks per pool etc.)
Storage Spaces FAQ here: http://bit.ly/UxZ1FP
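A hedged PowerShell sketch of pooling JBOD disks and creating a mirrored, tiered virtual disk with a write-back cache (hypothetical friendly names and sizes; assumes enough SSDs and HDDs are available to pool):
# Pool all eligible (un-pooled) physical disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
# Define the SSD and HDD tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
# Two-way mirror with storage tiers (fixed provisioning is implied) and a 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Mirror `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -WriteCacheSize 1GB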
Windows Server 2016 brings Storage Spaces Direct, a solution that uses direct-attached storage devices in each node to create a highly available storage system. With this solution you can deploy a hyperconverged infrastructure. The storage devices must be NVDIMM (SCM), NVMe, SATA or SAS, and the disks can be SSD or HDD. You can mix storage device types to create multi-resilient virtual disks that maximize performance and storage efficiency. For more information you can read this topic: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview
Storage Spaces Direct requires Windows Server 2016 Datacenter edition.
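As a minimal sketch, once a WS 2016 Datacenter failover cluster with eligible local drives exists, S2D is turned on cluster-wide with a single cmdlet:
# Claims the local drives in all nodes and builds the S2D storage pool and cache
Enable-ClusterStorageSpacesDirect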
|
|
Storage Integration (API)
Details
|
Yes (Quota, Storage I/O SLA)
NEW
RHEV 3.6 includes quota support and Service Level Agreement (SLA) for storage I/O bandwidth:
- Quota provides a way for the administrator to limit resource usage in the system, including vDisks. It gives the administrator a logical mechanism for managing disk size allocation for users and groups in the data center.
- Disk profiles limit bandwidth usage so that bandwidth can be allocated more effectively on limited connections.
|
Basic
Virtual disks on block-based SRs (e.g. FC, iSCSI) have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI.
Note: Bear in mind that QoS settings are applied to virtual disks accessing the LUN from the same host. QoS is not applied across hosts in the pool!
|
SMI-S / SMP, ODX, Trim, ReFS Accelerated VHDX Operation
(no major updates with WS2019) Server 2012 delivered significantly improved storage management:
- The Windows Storage Management API (SMAPI)
- PowerShell cmdlets enabling scriptable and remote administration
- A new interface - the Storage Management Provider framework - enabling customers to easily manage 3rd-party Storage subsystems using either a Storage Management Provider (SMP), or SMI-S Provider.
Microsoft has introduced several storage-related integration points with Server 2012 that improve virtualization environments. You can now e.g. manage SMI-S / SMP compliant storage arrays directly from within VMM 2012 for virtualization-related tasks (creating LUNs etc.).
Improved storage integration capabilities include:
- Management of compliant arrays directly from VMM 2012
- Windows Offloaded Data Transfer (ODX) enables the arrays to directly transfer data within or between compatible storage devices, bypassing the host computer. Requires ODX-compatible storage arrays and applications (e.g. Hyper-V operations that transfer large amounts of data like creating a fixed size virtual hard disk, merging snapshot or converting virtual hard disks)
- Trim: Windows Server 2012 provides a new API that enables applications to release storage when it is no longer needed. NTFS issues trim notifications when appropriate in real time. In addition, trim notifications are issued as a part of storage consolidation and optimization, which is performed regularly on a scheduled basis. Requires Trim compatible hardware
Improvements to ReFS in Windows Server 2016 bring Accelerated VHDX Operations, which speed up the checkpoint merging process and the creation of fixed VHDX files. ReFS should be used only when deploying Storage Spaces Direct.
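A minimal sketch for verifying and exercising trim from the host (hypothetical drive letter):
# 0 = delete (trim) notifications are enabled for NTFS
fsutil behavior query DisableDeleteNotify
# Re-send trim hints for previously freed space on a volume
Optimize-Volume -DriveLetter D -ReTrim -Verbose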
|
|
|
Various new enhancements + Neutron Integration (Tech Preview) - click for details
NEW
At present RHEV allows to:
- Do simplified management network setup that includes host level management.
- Assign migration\management\VM\host networks roles.
- Create profile for virtual machine NIC with specific parameters.
- Network QoS on host NICs and on virtual NIC profiles.
- Multiple network gateways per host (define a gateway for each logical network on a host).
- Refresh and automatic sync of host network configuration (allows the administrator to obtain and set updated network configuration).
- Improved bond support (add new bonds from the administration portal, in addition to the five predefined bonds for each host).
- Network labels to ease complex hypervisor networking configurations, comprising many networks.
- Predictable vNIC ordering inside guest OS for newly-created VMs.
- Hypervisors now recognize hotplugged network interfaces.
- Notifications in case of bond/NIC changing link state (e.g. link failure).
- Ability to configure custom properties on hypervisor network devices; specifically configuring advanced bridge and ethtool options.
- Dedicated network connectivity log on hypervisors to ease investigation in case of 'disaster'.
- Properly display arbitrarily-named hypervisor VLAN devices in the management console.
RHEV 3.6 adds the ability to:
- Use SR-IOV NICs by enabling you to create virtual functions and assign them to VMs.
- Report total network use of a VM.
- Get information on hosts that are out of sync from a network definition perspective and allow syncing them.
- Support for Cisco UCSM VM-FEX hook.
OpenStack Neutron as a network provider is currently a tech preview on Red Hat Enterprise Virtualization Manager. OpenStack Neutron can provide networking capabilities for consumption by hosts and virtual machines.
The integration includes:
• Advanced engine for network configuration.
• open vSwitch distributed virtual switching support.
• Ability to centralize network configurations with Red Hat Enterprise Linux OpenStack Platform (not included).
|
Yes (Open vSwitch) - distributed vSwitch
The Open vSwitch remains fully supported and developed; the earlier deprecation notice issued with XenServer 6.2 has been rescinded with XenServer 6.5.
Background: Open vSwitch is the default networking stack in XenServer 6.x.
An Open vSwitch (OVS) network flow is a match between a network packet header and an action such as forward or drop. A typical server VM could have hundreds or more connections to clients, and OVS needs to have a flow for each of these connections. As the number of VMs on the host builds up, the OVS flow table in the Dom0 kernel fills and induces round-trips to the OVS userspace process, which can degrade the network throughput to and from guests. In OVS v1.4, present in XenServer 6.2, the flow had to have an exact match for the header. XenServer 6.5 includes the latest version, OVS 2.1.3, which supports megaflows.
Megaflow support reduces the number of required entries in the flow table for most common situations and improves the ability of Dom0 to handle many server VMs connected to a large number of clients.
Details in the XenServer 6.5 vSwitch Controller Users Guide: http://bit.ly/1zLZ8NI
|
Distributed Storage QoS
(no major updates with WS2019) WS 2012 R2 introduced a new Storage QoS function that enables you to specify the maximum and minimum I/O loads in terms of I/O operations per second (IOPS) for each virtual disk in your virtual machines (ensuring that the storage throughput of one virtual hard disk does not impact the performance of another virtual hard disk on the same host).
- Specify the maximum IOPS allowed for a virtual hard disk that is associated with a virtual machine.
- Specify the minimum IOPS - receive a notification when the specified minimum IOPS for a virtual hard disk is not met.
- Monitor storage-related metrics through the virtual machine metrics interface.
Details here: http://bit.ly/1ayzAuW
Previous to WS 2012 R2 there was no native method of implementing storage I/O control for disk objects of hosts or virtual machines in Server 2012.
However, storage control was possible if the storage is network based (e.g. iSCSI, NFS or SMB) by regulating the network traffic that carries the storage I/O. Please note that control is limited to either the physical network port or the virtual port on the Hyper-V switch. Because an individual virtual disk that is e.g. stored on SMB is presented as a disk object to the virtual machine (and is not associated with a unique virtual network port), control of individual virtual disk objects is typically not possible.
Background for network based I/O control:
Windows Server 2012 introduces QoS control for physical networks as well as virtual networks (on the port level of the hyper-V virtual switch) with the ability to guarantee a minimum bandwidth to a virtual machine or a service.
With System Center 2012 (min SP1) you can configure some of the new networking settings centrally (but System Center is not required) including Bandwidth settings for virtual machines and IEEE priority tagging for QoS prioritization - PowerShell will be required for the more advanced networking options.
Windows Server 2012 also takes advantage of Data Center Bridging (DCB)-capable hardware to converge multiple types of network traffic on a single network adapter, with a guaranteed level of service to each type.
Unlike Server 2008 R2 (where only the maximum bandwidth was configurable - without being able to guarantee a minimum bandwidth), Server 2012 introduces a bandwidth floor - the ability to guarantee a certain amount of bandwidth to a specific type of traffic (port or virtual machine).
Windows Server 2012 offers two different mechanisms to enforce minimum bandwidth:
- through the enhanced packet scheduler in Windows (provides a fine granularity of classification, best choice if many traffic flows require minimum bandwidth enforcement, example would be a server running Hyper-V hosting many virtual machines, where each virtual machine is classified as a traffic flow)
- through network adapters that support Data Center Bridging (supports fewer traffic flows; however, it can classify network traffic that doesn't originate from the networking stack, e.g. it can be used with a CNA adapter that supports iSCSI offload, in which iSCSI traffic bypasses the networking stack - because the packet scheduler in the networking stack doesn't process this offloaded traffic, DCB is the only viable choice to enforce minimum bandwidth)
Another example is Server Message Block Direct (SMB Direct), a Windows Server 2012 feature that builds on Remote Direct Memory Access (RDMA). SMB Direct offloads the SMB traffic directly to an RDMA-capable NIC to reduce latency and the number of CPU cycles that are spent on networking.
Again, you can use Data Center Bridging (implemented by some NIC vendors) in network adapters. DCB works in a similar way to Minimum Bandwidth: each class of traffic, regardless of whether it is offloaded or not, has a bandwidth allocation; in the event of network congestion, each class gets its share - otherwise, each class gets as much bandwidth as is available.
In both cases, network traffic first must be classified (built-in classifications in Server 2012 include iSCSI, NFS, SMB and SMB Direct). Windows classifies a packet itself or gives instructions to a network adapter to classify it. The result of classification is a number of traffic flows in Windows, and a given packet can belong to only one of them.
Details here: http://bit.ly/Ueb07L
In Windows Server 2016, Microsoft brings distributed Storage QoS. Storage QoS policies are stored in the cluster database and can be applied to a single VM or to multiple VMs. Storage QoS is aware of the underlying storage IOPS and can centrally manage IOPS distribution. A QoS policy can set minimum and maximum IOPS and/or a bandwidth limit (in bytes per second) for a virtual disk. https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-qos/storage-qos-overview
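A hedged PowerShell sketch of both variants (hypothetical VM and policy names): the per-VHD IOPS caps available since WS 2012 R2, and a WS 2016 distributed Storage QoS policy applied to a VM's disk:
# WS 2012 R2: per-virtual-disk IOPS floor/cap
Set-VMHardDiskDrive -VMName "web01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 1000
# WS 2016 (on the cluster): create a policy and attach it to the VM's disk
$policy = New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 1000 -PolicyType Dedicated
Get-VMHardDiskDrive -VMName "web01" | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId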
|
|
|
|
Networking |
|
|
Advanced Network Switch
Details
|
Yes
RHEV supports 4 bonding modes with easy management of their definition on the host.
Details:
Mode 1 (active-backup policy) sets all interfaces to the backup state while one remains active. Upon failure on the active interface, a backup interface replaces it as the only active interface in the bond. The MAC address of the bond in mode 1 is visible on only one port (the network adapter), to prevent confusion for the switch. Mode 1 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 2 (XOR policy) selects the interface on which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses, modulo the slave count. This calculation ensures that the same interface is selected for each destination MAC address used.
Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Enterprise Virtualization.
Mode 4 (IEEE 802.3ad policy) creates aggregation groups for which included interfaces share the speed and duplex settings. Mode 4 uses all interfaces in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Enterprise Virtualization.
Mode 5 (adaptive transmit load balancing policy) ensures the outgoing traffic distribution is according to the load on each interface and that the current interface receives all incoming traffic. If the interface assigned to receive traffic fails, another interface is assigned the receiving role instead. Mode 5 is supported in Red Hat Enterprise Virtualization.
|
Yes (incl. LACP - New)
XenServer 6.1 added the following functionality, maintained with XenServer 6.5:
- Link Aggregation Control Protocol (LACP) support: enables the use of industry-standard network bonding features to provide fault-tolerance and load balancing of network traffic.
- Source Load Balancing (SLB) improvements: allows up to 4 NICs to be used in an active-active bond. This improves total network throughput and increases fault tolerance in the event of hardware failures. The SLB balancing algorithm has been modified to reduce load on switches in large deployments.
Background:
XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:
• LACP bonding is only available for the vSwitch, while active-active and active-passive are available for both the vSwitch and Linux bridge.
• When the vSwitch is the network stack, you can bond either two, three, or four NICs.
• When the Linux bridge is the network stack, you can only bond two NICs.
XenServer 6.1 provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:
• Active/Active mode, with VM traffic balanced between the bonded NICs.
• Active/Passive mode, where only one NIC actively carries traffic.
• LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the server.
Reference: http://bit.ly/1E2HvQ7
|
Yes (Hyper-V Virtual Switch); Extended Port ACLs, vRSS, DLB - Switch Embedded Teaming - Network Controller
(no major updates with WS2019) Arguably the biggest improvements to virtualization and cloud environments with Server 2012 have come with the addition of the Hyper-V Extensible Switch (Hyper-V as well as free Hyper-V Server).
It is essentially an Ethernet switch that runs in the management operating system of the Hyper-V parent partition and allows you to connect virtual machines to both virtual networks and the physical network. In addition, the Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels.
The Hyper-V Virtual Switch enables ISVs to create extensible plug-ins (Virtual Switch Extensions) that can provide enhanced networking and security capabilities. Management of the Hyper-V extensible switch, including deployment and configuration of virtual switch extensions using a new logical switch concept can be done through VMM 2012 (min SP1).
Note: The Extensible Switch is strictly speaking not a natively distributed switch (i.e. there are individual switch instances on each host). However, using Virtual Switch Extensions, Hyper-V can provide more advanced and fully distributed virtual switches (e.g. using Cisco's Nexus 1000V), but these are typically fee-based.
New with WS 2012 R2 are the following enhancements:
1. Extended Port ACLs (configure ACLs to provide firewall protection and enforce security policies for the tenant VMs in their datacenters)
- In Windows Server 2012, you were able to specify both source and destination MAC and IP addresses for IPv4 and IPv6. For Windows Server 2012 R2 you can also specify the port number when you create rules
- Stateful rules that are unidirectional and provide a timeout parameter (after the rule is utilized successfully one time, the two traffic flows are allowed without having to be looked up against the rule again for a period of time that you designate using the timeout attribute); see the PowerShell sketch after this list
2. Dynamic Load Balancing (DLB) of traffic types (WS 2012 provided simultaneous load distribution and failover, but did not ensure load distribution between the NICs in a NIC team in a balanced manner - in WS 2012 R2, dynamic load balancing continuously and automatically moves traffic streams from NIC to NIC within the NIC team to share the traffic load as equitably as possible).
3. Hyper-V Network Virtualization coexists with third party forwarding extensions
When you have a third party forwarding extension installed, Hyper-V Virtual Switch now performs hybrid forwarding. With hybrid forwarding, network traffic that is NVGRE encapsulated is forwarded by the HNV module within the switch, while all non-NVGRE network traffic is forwarded by the third party forwarding extensions that you have installed.
In addition to forwarding, a third party forwarding extension can still apply other policies, such as ACLs and QoS, to both the NVGRE and the non NVGRE-encapsulated traffic. The forwarding extension that you install must be able to process both types of network traffic based on their intended destinations. The policies and capabilities of the Hyper-V Virtual Switch and third party extensions do not displace each other – instead, they are mutually available.
4. Reduced bottle-necks with vRSS
In WS 2012 Receive Side Scaling (RSS spreads the processing across multiple cores on the host and multiple cores on the VM) over SR-IOV was supported; now in Windows Server 2012 R2, virtual RSS (vRSS) is supported on the VM network path, allowing VMs to sustain a greater networking traffic load. To take advantage of vRSS, VMs must be configured to use multiple cores, and they must support RSS. vRSS is enabled automatically when the VM uses RSS on the VM network path.
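A hedged PowerShell sketch of an extended port ACL as described in point 1 above (hypothetical VM name): a stateful rule allowing inbound TCP 443, with return traffic for the established flow permitted automatically:
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant01" -Action Allow -Direction Inbound `
    -LocalPort 443 -Protocol TCP -Weight 10 -Stateful $true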
Other key features of the Hyper-V Virtual Switch:
Multi-tenancy - through the Extensible Switch, Server 2012 now provides the isolation and security capabilities required for multi-tenancy by offering:
- Multitenant virtual machine isolation through PVLANs
- Protection from Address Resolution Protocol/Neighbour Discovery (ARP/ND) poisoning (also called spoofing)
- Protection against DHCP snooping and DHCP Guard
- Virtual port ACLs
- The capability to trunk traditional VLANs to virtual machines
- Monitoring Windows PowerShell/Windows Management Instrumentation (WMI)
The Hyper-V Extensible Switch can be configured as:
- External (ports that connect to a single external NIC as well as one or more virtual NICs in vms)
- Internal (allowing communication between parent and child partitions, i.e. host and guests)
- Private (communication between child partitions only)
Details here: http://bit.ly/Ue8K0u
In Windows Server 2016, Microsoft has improved the virtual switch so that it now manages teaming itself. It is no longer necessary to create a team first and then create the virtual switch on top; this is called Switch Embedded Teaming (SET). Thanks to SET, RDMA and vRSS can be enabled on virtual NICs located in the hypervisor (parent partition). This makes it possible to converge everything onto two network adapters (storage, Live Migration, management, heartbeat and so on). https://blogs.technet.microsoft.com/wsnetdoc/2015/09/01/switch-embedded-teaming-network-adapter-teaming-within-hyper-v-virtual-switch/
Moreover, Microsoft has improved the Software-Defined Networking layer. The whole layer can be managed through the Network Controller, which provides an API to manage all the network devices (virtual switches, software load balancers and so on). The Network Controller can be managed through System Center Virtual Machine Manager and requires Windows Server 2016 Datacenter edition.
In Windows Server 2016, the SDN layer supports VXLAN.
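A hedged sketch of a converged WS 2016 SET switch with host vNICs (hypothetical adapter/switch names; assumes RDMA-capable NICs):
# Create the switch directly from two physical NICs - SET replaces a separate LBFO team
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
# Host vNICs for management and SMB/Live Migration traffic
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "ConvergedSwitch"
# RDMA can now be enabled on the host vNIC itself
Enable-NetAdapterRdma -Name "vEthernet (SMB01)"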
|
|
|
Yes
With RHEV the Network Interfaces tab of the details pane shows VLAN information for the edited network interface. In the VLAN column newly created VLAN devices are shown, with names based on the network interface name and VLAN tag.
Background: RHEV is VLAN aware and able to tag and redirect VLAN traffic; however, a VLAN implementation requires a switch that supports VLANs.
At the switch level, ports are assigned a VLAN designation. A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry the same VLAN tag. A VLAN can extend across multiple switches. VLAN tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent to a single port, to be deciphered using software on the machine that receives the traffic.
|
Yes (limited for mgmt. traffic)
VLANs are supported with XenServer. To use VLANs with virtual machines, use switch ports configured as 802.1Q VLAN trunk ports in combination with the XenServer VLAN feature to connect guest virtual network interfaces (VIFs) to specific VLANs (you can create new virtual networks with XenCenter and specify the VLAN IDs). The XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port - to place management traffic on a desired VLAN, the switch ports need to be configured to perform 802.1Q VLAN tagging/untagging (native VLAN or access mode ports). In this case the XenServer host is unaware of any VLAN configuration.
XenServer 6.1 removed a previous limitation which caused VM deployment delays when large numbers of VLANs were in use. This improvement enables administrators using XenServer 6.x to deploy hundreds of VLANs in a XenServer pool quickly.
|
Yes (LBFO); Dynamic Load Balancing - Switch Embedded Teaming
(no major updates with WS2019) With Server 2012, MS finally introduced native NIC teaming capabilities in the OS (rather than relying on third-party solutions) - providing integrated bandwidth aggregation and traffic failover.
Microsoft also refers to it as Load Balancing/Failover (LBFO).
- works with (and between) different NIC vendors, fully supported by MS
- supports up to 32 NICs in a team
New in WS 2012 R2 is the Dynamic Load Balancing (DLB) of traffic types (WS 2012 provided simultaneous load distribution and failover, but did not ensure load distribution between the NICs in a NIC team in a balanced manner - in WS 2012 R2, dynamic load balancing continuously and automatically moves traffic streams from NIC to NIC within the NIC team to share the traffic load as equitably as possible).
Restrictions - NIC teaming is compatible with all networking capabilities in Windows Server 2012 R2 with five exceptions:
- SR-IOV
- Remote Direct Memory Access (RDMA)
- Native host Quality of Service
- TCP Chimney Offload
- 802.1X Authentication
Algorithms:
Option 1: Switch-independent mode; NICs can connect to different switches, doesn't require the switch to participate in the teaming, typically less sophisticated load balancing
Option 2: Switch-dependent mode; requires a compatible switch to participate in the teaming, all interfaces of the team are connected to the same switch, typically supports more advanced load balancing
Load Balancing modes for both switch-dependent and independent (new with R2):
- Address Hash distribution
- Hyper-V Port distribution
- Dynamic distribution
Generally the Dynamic distribution will provide better performance and should be used - details on which mode to deploy in WS 2012 R2 (a scary 52-page white paper) here: http://bit.ly/1ayFmwk
In Windows Server 2016, Microsoft has improved the virtual switch so that it can now manage NIC teaming itself. This feature is called Switch Embedded Teaming (SET). With SET you no longer need to create a NIC team and then create the virtual switch; instead, you specify the network adapters that will be part of the team when creating the virtual switch. SET also allows RDMA, vRSS and DCB to be enabled on virtual network adapters located in the parent partition, which makes it possible to converge all traffic across two network adapters.
SET supports only the switch-independent teaming mode with the Dynamic or Hyper-V Port load-balancing algorithms; Microsoft recommends the Dynamic algorithm. For further information about SET, you can read this topic: https://blogs.technet.microsoft.com/wsnetdoc/2015/09/01/switch-embedded-teaming-network-adapter-teaming-within-hyper-v-virtual-switch/
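A minimal sketch of a classic LBFO team using the recommended settings (hypothetical NIC names):
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic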
|
|
|
Yes (via host hooks)
Currently PVLAN support in RHEV is done via host hook.
Background: Hooks are scripts executed on the host when key events occur. The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Enterprise Virtualization Hypervisor (RHEV-H) is not currently supported.
|
No
XenServer does not support PVLANs.
Please refer to the Citrix XenServer Design: Designing XenServer Network Configurations guide for details on network design and security considerations http://bit.ly/14LW9b9
|
Yes
(no major updates with 2019)
Yes, supports static VLAN and VLAN tagging
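As a minimal sketch (hypothetical VM name), tagging a VM's virtual NIC into a VLAN is a one-liner on the Hyper-V host:
Set-VMNetworkAdapterVlan -VMName "web01" -Access -VlanId 100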
|
|
|
Guests fully, hypervisors partially
RHEV currently uses IPv4 for internal communications and does not use/support IPv6. Using IPv6 at the virtual machine level is fully supported, though, provided that you're using a compatible guest operating system.
RHEV includes network custom properties, which may be used to assign IPV6 addresses to host interfaces using the vdsm-hook-ipv6 hook.
|
Yes (guests only)
XenServer 6.1 introduced formal support for IPv6 in XenServer guest VMs (maintained with 6.5). Customers already used it with e.g. 6.0 but the 6.1 release notes list this as new official feature: IPv6 Guest Support: enables the use of IPv6 addresses within guests allowing network administrators to plan for network growth.
Full support for IPv6 (i.e. assigning the host itself an IPv6 address) will be addressed in the future.
|
Yes
(no major updates with 2019) Server 2012 introduced support for PVLANs, which provides isolation between virtual machines on the same VLAN. (A common example is a guest network in a hotel where all guests want to communicate with the outside world, e.g. through a router, but should be prevented from communicating with each other.)
You can do this by assigning every virtual machine in a PVLAN one primary VLAN ID and one or more secondary VLAN IDs. The PVLAN ports can operate in one of three modes:
- Isolated (can't communicate at layer 2)
- Promiscuous (promiscuous ports can communicate with any port on the same primary VLAN ID - this would be the router in our hotel example)
- Community (ports on the same secondary VLAN ID can communicate at layer 2)
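A minimal sketch of the PVLAN modes above (hypothetical VM names): an isolated guest and a promiscuous port for the router VM on the same primary VLAN:
Set-VMNetworkAdapterVlan -VMName "guest01" -Isolated -PrimaryVlanId 10 -SecondaryVlanId 200
Set-VMNetworkAdapterVlan -VMName "router01" -Promiscuous -PrimaryVlanId 10 -SecondaryVlanIdList 200-299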
|
|
|
Yes
RHEV has added the ability to pass through host PCI and USB devices, including GPUs, along with the use of SR-IOV via VFIO and high-speed PCI Express based SSD storage.
|
SR-IOV
XenServer 6.0 provided improved SR-IOV support, maintained with v 6.5
Note that SR-IOV is supported only with SR-IOV enabled NICs listed on the XenServer Hardware Compatibility List and only when used in conjunction with a Windows Server 2008 guest, and it requires Intel VT-d capable systems. Generally, with SR-IOV VFs, functions that require VM mobility such as live migration, workload balancing, rolling pool upgrade, High Availability and Disaster Recovery are not possible.
|
Yes
(no major updates with w2019)
Windows Server 2012 has full IPv6 support for host and vms (parent and child partitions).
The new features of IPv6 in Windows Server 2012 are
- better connectivity on the Internet (IPv6 attempts to connect first; if this connection fails, Windows marks the network interface as unusable for IPv6, causing the IPv6 destination addresses for the Internet properties to be unreachable and reaching them through IPv4 instead)
- a protocol translation service for DirectAccess clients (NAT64/DNS64 work together to translate incoming connection traffic from an IPv6 node to IPv4 traffic: DNS64 resolves the name of an IPv4-only host to a translated IPv6 address, while NAT64 translates the incoming IPv6 traffic to IPv4 traffic and performs the reverse translation for response traffic)
- better manageability of IPv6 settings through Windows PowerShell (The ability to manage IPv6 settings in the common management environment of Windows PowerShell, including scripts and workflows)
|
|
|
Yes
The network management UI in RHEV allows you to set the MTU for network interfaces (jumbo frames).
|
Yes
You can set the Maximum Transmission Unit (MTU) for a XenServer network in the New Network wizard or for an existing network in its Properties window. The possible MTU value range is 1500 to 9216.
|
Yes - SRIOV (incl. Live Migration)
(no major updates with WS 2019)
In order to achieve native I/O for the network in virtual machines Windows Server 2012 (maintained with R2) added the ability to assign SR-IOV functionality from physical devices directly to virtual machines. This gives virtual machines the ability to bypass the software-based Hyper-V Virtual Switch, and directly access the network adapter. As a result, CPU overhead and latency is reduced, with a corresponding rise in throughput.
Requirements:
- host hardware that supports SR-IOV (e.g. CPU with Intel VT-d2, chipset support for interrupt and DMA remapping)
- SR-IOV-capable NIC in the virtualization host
Features:
- Live Migration is fully supported. You can Live Migrate a VM using SR-IOV to another host that either does or does not support SR-IOV, and back again. The VM will use SR-IOV if it is available on the target host, and if SR-IOV is unavailable, it will use the traditional software network path.
Limitations:
- NIC Teaming & SR-IOV: when a NIC team is created on top of SR-IOV capable physical NICs, the SR-IOV capability is not propagated. So you can't team in the parent partition and use the same NICs for SR-IOV in the vms at the same time (instead, present two SR-IOV NICs to the vms and team them in the guest OS)
- SR-IOV causes the virtual machine's traffic to bypass the Hyper-V Virtual Switch. If any switch port policies are set, SR-IOV functionality is revoked for that virtual machine
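A hedged sketch (hypothetical switch/NIC/VM names; requires SR-IOV capable hardware): IOV must be enabled when the virtual switch is created and is then weighted onto the VM's adapter:
New-VMSwitch -Name "IovSwitch" -NetAdapterName "NIC3" -EnableIov $true
# 0 disables SR-IOV for the adapter; any value greater than 0 requests a virtual function
Set-VMNetworkAdapter -VMName "web01" -IovWeight 50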
|
|
|
TOE
Currently, RHEV hypervisors support TOE. Due to the way that RHEV provides networking access to virtual machines, using technologies such as TSO/LRO/GRO is currently unsupported. This is set to change when Open vSwitch is supported in upcoming versions.
|
Yes (TSO)
TCP Segmentation Offload can be enabled, see http://bit.ly/13e9WLi
By default, Large Receive Offload (LRO) and Generic Receive Offload (GRO) are disabled on all physical network interfaces. Though unsupported, you can enable it manually http://bit.ly/14l79kp
|
Yes
(no major updates with WS 2016) Support for Jumbo frames has been added with 2008 R2 and is maintained with Server 2012 and 2012 R2.
These are the basic steps to enable jumbo frames in Server 2012 for your virtual machines:
- Networking hardware must support jumbo packets and the feature must be enabled
- Hyper-V virtual switch must have jumbo frames enabled
- The individual virtual NICs must have jumbo frames enabled
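A hedged sketch of the host-side step (hypothetical adapter name; the exact advanced-property keyword can vary by NIC driver):
# Enable ~9k jumbo packets on the physical NIC backing the virtual switch;
# the vNIC inside the guest is then configured from the guest OS
Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014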
|
|
|
Yes (vNIC Profile)
RHEV added the ability to control Network QoS using virtual NIC profiles through the RHEV-M interface.
Users can now limit the inbound and outbound network traffic on a virtual NIC level by applying profiles which define attributes such as port mirroring, quality of service (QoS) or custom properties.
|
Yes (outgoing)
QoS of network transmissions can be done either at the vm level (basic), by setting a kB/sec limit for the virtual NIC, or at the vSwitch level (global policies). With the DVS you can select a rate limit (with units) and a burst size (with units). Traffic to all virtual NICs included in this policy level (e.g. you can create vm groups) is limited to the specified rate, with individual bursts limited to the specified number of packets. To prevent inheriting existing enforcement, the QoS policy at the VM level should be disabled.
Background:
To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.
The QoS value limits the rate of transmission from the VM. As with many QoS approaches the QoS setting does not limit the amount of data the VM can receive. If such a limit is desired, Citrix recommends limiting the rate of incoming packets higher up in the network (for example, at the switch level).
Depending on the networking stack configured in the pool, you can set the Quality of Service (QoS) value on VM virtual interfaces (VIFs) in one of two places - either a) on the vSwitch Controller or b) in XenServer (using the CLI or XenCenter).
|
Yes (RSS, VMQ, VMMQ, IPsec Offload, SR-IOV)
(no major updates with WS 2019)
Server 2012 supports various functions that improve the networking performance by offloading operations to specialized hardware.
- SR-IOV (assign SR-IOV -capable network adapters directly to a virtual machine to maximize network throughput while minimizing network latency and CPU overhead.)
- Dynamic Virtual Machine Queue (D-VMQ) - essentially allows the host's single network adapter to appear as multiple network adapters to the virtual machines, allowing each virtual machine its own dedicated network adapter, resulting in less data in the host's buffers and an overall performance improvement to I/O operations. New in Server 2012: incoming network traffic processing is dynamically distributed to host processors, based on processor use and network load. If network load is heavy, D-VMQ automatically uses more processors; with light network load, Dynamic VMQ uses fewer CPUs.
- IPsec Task Offload for Virtual Machines (uses hardware capabilities of server network adaptors to offload IPsec processing. This reduces the CPU overhead of IPsec encryption and decryption)
- Accelerating network traffic through Receive-side scaling and Receive Segment Coalescing
Note: In Server 2012 Virtual machine (VM) Chimney, also called TCP Offload, has been removed (incompatible with native NIC teaming in Server 2012) so TCP chimney will not be available to guest operating systems.
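A hedged sketch of typical offload-related settings (hypothetical adapter/VM names):
# RSS on the physical host NIC
Enable-NetAdapterRss -Name "NIC1"
# VMQ weight and IPsec task offload (max security associations) on a VM's network adapter
Set-VMNetworkAdapter -VMName "web01" -VmqWeight 100 -IPsecOffloadMaximumSecurityAssociation 512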
|
|
|
Yes (Port Mirroring)
RHEV has port mirroring capabilities.
It is now possible to configure the virtual Network Interface Card (vNIC) of a virtual machine to run in promiscuous mode. This allows the virtual machine to monitor all traffic to other vNICs exposed by the host on which it runs. Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.
RHEV also adds the ability to create a Virtual Network Interface Controller (VNIC) profile to toggle port monitoring. There are also the vendor-supplied UI plug-ins to RHEV-M, e.g. the Nagios community plugin.
|
Yes (Port Mirroring)
The XenServer vSwitch has Traffic Mirroring capabilities. The Remote Switched Port Analyzer (RSPAN) policies support mirroring traffic sent or received on
a VIF to a VLAN in order to support traffic monitoring applications. Use the Port Configuration tab in the vSwitch Controller UI to configure policies that apply to the VIF ports.
|
Yes (max and min bandwidth); SMB traffic types
(no major updates with WS 2019)
Windows Server 2012 introduces QoS control for networking with the ability to guarantee a minimum bandwidth to a virtual machine or a service.
With System Center 2012 (min SP1) you can configure some of the new networking settings centrally (but System Center is not required) including Bandwidth settings for virtual machines and IEEE priority tagging for QoS prioritization - PowerShell will be required for the more advanced networking options.
Windows Server 2012 also takes advantage of Data Center Bridging (DCB)-capable hardware to converge multiple types of network traffic on a single network adapter, with a guaranteed level of service to each type.
Unlike Server 2008 R2 (where only the maximum bandwidth was configurable, without being able to guarantee a minimum bandwidth), Server 2012 introduces a bandwidth floor - the ability to guarantee a certain amount of bandwidth to a specific type of traffic (port or virtual machine).
Windows Server 2012 offers two different mechanisms to enforce minimum bandwidth:
- through the enhanced packet scheduler in Windows (provides a fine granularity of classification, best choice if many traffic flows require minimum bandwidth enforcement, example would be a server running Hyper-V hosting many virtual machines, where each virtual machine is classified as a traffic flow)
- through network adapters that support Data Center Bridging (supports fewer traffic flows, however, it can classify network traffic that doesn't originate from the networking stack, e.g. can be used with a CNA adapter that supports iSCSI offload, in which iSCSI traffic bypasses the networking stack - because the packet scheduler in the networking stack doesn't process this offloaded traffic, DCB is the only viable choice to enforce minimum bandwidth)
In both cases, network traffic first must be classified. Windows classifies a packet itself or gives instructions to a network adapter to classify it. The result of classification is a number of traffic flows in Windows, and a given packet can belong to only one of them.
Details here: http://bit.ly/Ueb07L
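A minimal PowerShell sketch of both settings via the packet scheduler path, assuming a hypothetical switch and VM; MaximumBandwidth is specified in bits per second.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1" -MinimumBandwidthMode Weight   # switch must use weight-based minimum bandwidth
Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthWeight 50   # relative share of the bandwidth floor
Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 1GB        # hard cap on outgoing traffic (~1 Gbit/s)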
|
|
Traffic Monitoring
Details
|
Yes (virtio), Mem Balloon optimization and error messages
The virtio-balloon driver allows guests to express to the hypervisor how much memory they require. The balloon driver allows the host to efficiently allocate memory to the guest and allows free memory to be reallocated to other guests and processes. Guests using the balloon driver can mark sections of the guest's RAM as not in use (balloon inflation). The hypervisor can free that memory and use it for other host processes or other guests on that host. When the guest requires the freed memory again, the hypervisor can reallocate RAM to the guest (balloon deflation).
This includes:
- Memory balloon optimization (users can now enable virtio-balloon for memory optimization on clusters; all virtual machines on cluster level 3.2 and higher include a balloon device, unless specifically removed. When memory balloon optimization is set, MoM will start ballooning to allow memory overcommitment, within the limitation of the guaranteed memory size of each virtual machine.)
- Ballooning error messages (When ballooning is enabled for a cluster, appropriate messages now appear in the Events tab)
|
Yes (DMC)
XenServer 5.6 introduced Dynamic Memory Control (DMC) that enables dynamic reallocation of memory between VMs. This capability is maintained in 6.x.
XenServer DMC (sometimes known as 'dynamic memory optimization', 'memory overcommit' or 'memory ballooning') works by automatically adjusting the memory of running VMs, keeping the amount of memory allocated to each VM between specified minimum and maximum memory values, guaranteeing performance and permitting greater density of VMs per server. Without DMC, when a server is full, starting further VMs will fail with 'out of memory' errors: to reduce the existing VM memory allocation and make room for more VMs you must edit each VM's memory allocation and then reboot the VM. With DMC enabled, even when the server is full, XenServer will attempt to reclaim memory by automatically reducing the current memory allocation of running VMs within their defined memory ranges.
|
Yes (port mirroring); enhanced tracing
(no major updates with WS 2019) The Hyper-V Extensible Switch in Server 2012 provides port mirroring, which allows an administrator to designate which virtual ports should be monitored and to which virtual port the monitored traffic should be delivered for further processing. For example, a security monitoring virtual machine can look for anomalous patterns in the traffic flowing through other specific virtual machines on the switch. In addition, an administrator can diagnose network connectivity issues by monitoring traffic bound for a particular virtual switch port.
New with WS 2012 R2 is a streamlined and more detailed network tracing capability - network traces now contain switch and port configuration information, and tracing packets through the Hyper-V Virtual Switch and any installed forwarding extensions is easier to use and read.
Using the new Microsoft Message Analyzer (not just R2) you can also mirror and capture network traffic for remote and local viewing.
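For illustration, port mirroring is a per-vNIC property in PowerShell; the VM names below are hypothetical and both VMs are assumed to be on the same virtual switch.
Set-VMNetworkAdapter -VMName "WebVM" -PortMirroring Source            # mirror this VM's traffic
Set-VMNetworkAdapter -VMName "MonitorVM" -PortMirroring Destination   # deliver the mirrored traffic to the monitoring VM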
|
|
|
Hypervisor
|
|
|
|
|
|
|
General |
|
|
Hypervisor Details/Size
Details
|
No limit stated
The RHEV hypervisor is not limited by a fixed technology (or marketing) restriction. Red Hat lists no limit for the maximum ratio of virtual CPUs per host.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-Overcommitting_with_KVM.html
In reality, the specifications of the underlying hardware, the nature of the workload in the VM, and the overall restriction of 240 logical CPUs per host will determine the limit. RHEV has been publicly demonstrated (see SPECvirt) to run over 550 VMs with a mix of SMP vCPU VMs.
|
XenServer 6.5: Xen 4.4 -based
NEW
XenServer is based on the open-source Xen hypervisor (XenServer 6.5 now runs on the Xen 4.4 hypervisor, provides GPT support and a smaller, more scalable Dom0). XenServer automatically scales the amount of memory allocated to the Control Domain (Dom0) based on the physical memory available.
XenServer uses paravirtualization and hardware-assisted virtualization, requiring either modified guest OSs or hardware-assisted CPUs (the latter is more common as it is less restrictive, and hardware-assisted CPUs like Intel VT/AMD-V have become standard). The device drivers are provided through a 64-bit Linux-based guest (CentOS) running in a control virtual machine (Dom0). The Xen hypervisor itself is small (e.g. the download of Xen 3.4.3 including tools is 10MB). The disk footprint of XenServer varies (recommended 16GB min), mainly due to the size of the Dom0 OS (e.g. XenServer 5.6 creates 2x4GB partitions by default).
|
Hyper-V 4
Hyper-V is Microsoft's proprietary hypervisor. Since Server 2012 / 2012 R2 it can be installed as a role in a full Windows OS, in the stripped-down Windows Server Core mode, or as the standalone Hyper-V Server.
The hypervisor itself has a very small footprint compared to the full Windows Server installation.
|
|
|
|
Host Config |
|
|
Max Consolidation Ratio
Details
|
x86: 288
PPC: 192
The maximum supported number of CPUs (with hyper-threading enabled) depends on the particular version of RHEL-H or RHEV-H. For details please check https://access.redhat.com/articles/rhel-limits
|
1000 VMs per host
Citrix has listed the new maximum consolidation ratios for XenServer 6.5 Service Pack 1 within the published configuration limits document http://bit.ly/1NA7m6h
- Concurrent VMs per host (Windows): 1000
- Concurrent protected VMs per host with HA enabled: 500
Disclaimers are:
- The maximum number of VMs/host supported is dependent on VM workload, system load, and certain environmental factors. Citrix reserves the right to determine what specific environmental factors affect the maximum limit at which a system can function. For systems running over 500 VMs, Citrix recommends allocating 8GB RAM and 8 exclusively pinned vCPUs to dom0, and setting the OVS flow-eviction-threshold to 8192.
- The maximum number of logical processors supported differs by CPU. Please consult the XenServer Hardware Compatibility List for more details on the maximum number of logical cores supported per vendor and CPU.
- Each plugged VBD, plugged VIF, or Windows VM reduces this number by 1.
|
1024 VMs/host, 2048 vCPUs/host
(no major updates with WS 2019)
Server 2012 supports a maximum of 1024 powered-on VMs per host and a maximum of 2048 virtual CPUs per host (when running virtual machines with multiple vCPUs, the limit that is reached first applies).
The actually achievable maximum depends on workload characteristics and hardware configuration.
|
|
|
unlimited
There is no license restriction for the maximum number of cores per CPU.
|
160 (logical)
XenServer 6.5 supports up to 160 logical CPUs (threads), e.g. 4 sockets with 8 cores each and Hyper-Threading enabled = 64 logical CPUs.
|
512 Logical CPUs
(no major updates with WS 2019)
In Windows Server 2016, support for logical processors is extended to 512.
|
|
Max Cores per CPU
Details
|
x86: 4TB
PPC: 2TB
Max amount of physical RAM installed in host and recognized by RHEV is 4TB (4000GB) for x86 hosts and 2TB (2000GB) for PPC hosts.
|
unlimited
The XenServer license does not restrict the number of cores per CPU
|
unlimited
(no major updates with WS 2019)
Number of cores per CPU (socket) are not restricted with the Hyper-V/Hyper-V Server license
|
|
Max Memory / Host
Details
|
240 vCPUs per VM
The maximum supported number of virtual CPUs per vm (please note that the actual number depends on the type of the guest operating system).
|
1TB
XenServer supports a maximum of 1TB per host, if a host has one or more 32-bit VMs running then a maximum of 128 GB RAM is supported on the host
|
24TB
Hyper-V in Server 2016 and later supports up to 24TB of physical RAM per host.
|
|
|
|
VM Config |
|
|
|
4TB
Maximum amount of configured virtual RAM for an individual vm is 4TB (4000GB).
|
16 (Win) / 32(Linux)
You can only specify up to 16 vCPUs using the XenCenter GUI (you can increase this for Linux guests using the command line e.g. xe vm-param-set VCPUs-max=32 uuid=...). Actual numbers vary with the guest OS version (e.g. license restrictions), see http://bit.ly/1ELdzpv for details.
|
up to 240 vCPU (Win) / 240 vCPU (Linux)
Maximum number of virtual CPUs configurable in a Windows or Linux virtual machine - achievable numbers vary greatly with the specific guest OS version, please check! (240 vCPUs are supported for e.g. Windows Server 2012, 2008 R2, CentOS 5.7, 5.8, 6.0-6.3, RHEL 6-6.3, SLES 11 SP2, etc.).
For all supported guests with Server 2016 see https://technet.microsoft.com/en-us/windows-server-docs/compute/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows
|
|
|
Yes
NEW
RHEV exposes serial console access through SSH. You can configure host UNIX domain sockets or named pipes to be attached to virtual machine serial ports, using hooks.
|
192GB
NEW
A maximum of 192GB is supported for guest OSs; the actual number varies greatly with the guest OS version, so please check for specific guest support.
The maximum amount of physical memory addressable by your operating system varies. Setting the memory to a level greater than the operating system's supported limit may lead to performance issues within your guest. Some 32-bit Windows operating systems can support more than 4 GB of RAM through use of the physical address extension (PAE) mode. The limit for 32-bit PV virtual machines is 64GB. Please consult your guest operating system's Administrator's Guide and the XenServer Virtual Machine User's Guide for more details. Details here: http://bit.ly/14DILov
|
12TB
Maximum virtual RAM supported per virtual machine is 12TB.
|
|
|
Yes
NEW
You can pass through any host USB device directly to a virtual machine. You can also use the SPICE protocol capabilities to redirect USB devices from a client computer to a virtual machine.
|
No
You cannot configure serial ports (as virtual hardware) for your VM.
|
Yes (named pipe)
(no major updates with WS 2019)
A virtual machine serial port (max. 2 per VM) can natively only be connected to a named pipe (i.e. not to the host's physical port or the network, unless using 3rd party options).
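A one-line sketch with hypothetical names: attach COM1 of a VM to a named pipe on the host.
Set-VMComPort -VMName "VM01" -Number 1 -Path "\\.\pipe\vm01-com1"   # COM1 is now backed by the named pipe \\.\pipe\vm01-com1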
|
|
|
Yes
NEW
RHEV has the ability to hot-add network interface cards, virtual disk storage, vCPUs and memory. Hot unplug is not currently supported for vCPUs and memory, but is planned for future versions. Support is dependent on guest OS support.
|
No (except mass storage)
XenServer doesn’t natively support USB passthrough of anything but mass storage devices.
|
Yes
By connecting the USB device to the host and configuring it as a physical hard disk, the USB device can be mounted inside the virtual machine in passthrough mode.
Be aware that Hyper-V in Windows Server 2016 can redirect local resources - including USB devices - to a virtual machine session through the Virtual Machine Connection tool. The enhanced session mode connection uses a Remote Desktop Connection session via the virtual machine bus (VMBus), so no network connection to the virtual machine is required.
The other (native) exception is provided for virtual desktop workloads where you can use the capabilities of the RemoteFX protocol to redirect USB devices to virtual machines.
There are, however, several 3rd party utilities that allow you to redirect USB over IP and provide connectivity to VMs in this way. Another workaround is to connect a local USB device to the VM while connecting with the standard RDP client (mstsc.exe) and selecting your local USB device through the Local devices and resources option.
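The physical-disk route described above looks roughly like this in PowerShell (hypothetical disk number and VM name): take the USB mass-storage device offline on the host, then attach it to the VM's SCSI controller as a pass-through disk.
Set-Disk -Number 3 -IsOffline $true                                    # the disk must be offline on the host before pass-through
Add-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -DiskNumber 3  # attach physical disk 3 directly to the VM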
|
|
|
No
No GPU acceleration available.
|
Yes (disk, NIC)
XenServer supports adding of disks and network adapters while the VM is running - hot plug requires the specific guest OS to support these functions - please check for specific support with your OS vendor.
|
Yes (except CPU)
(No major change in WS 2019) Server 2012 / R2 Hyper-V allows hot-adding memory to a virtual machine as long as the memory is configured as dynamic, essentially allowing you to adjust the memory while the VM is up and running. Server 2016 added the ability to hot-add static memory and network adapters to running VMs.
Since 2008 R2, Hyper-V supports adding virtual storage (only when connected to a virtual SCSI adapter, not IDE) while the VM is running.
Windows Server 2016 enables hot add/remove of virtual NICs and static memory, but does not support hot-adding vCPUs. Note that guest applications such as SQL Server typically need to be restarted when a vCPU is added to a VM, even on VMware.
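A hedged sketch of the hot-add operations mentioned above, run against a running WS 2016 generation 2 VM; the names and paths are hypothetical.
Set-VMMemory -VMName "VM01" -StartupBytes 8GB                          # resize static memory of a running VM (WS 2016+)
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "LAN"                  # hot-add a vNIC (WS 2016+, generation 2 guests)
Add-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -Path "D:\VHDs\data01.vhdx"   # hot-add a virtual disk on the SCSI controller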
|
|
Graphic Acceleration
Details
|
DAS, iSCSI, NFS, GlusterFS, FC, POSIX;
Virtio SCSI support
RHEV storage is logically grouped into storage pools, which comprise three types of storage domains: data (VMs and snapshots), export (a temporary storage repository used to copy and move images between data centers and RHEV instances), and ISO.
The data storage domain is the only one required by each data center and exclusive to a single data center. Export and ISO domains are optional, but require NFS or POSIX.
Storage domains are shared resources and can be implemented using NFS, GlusterFS, POSIX, iSCSI or the Fibre Channel Protocol (FCP). On NFS, all virtual disks, templates, and snapshots are simple files. On SAN (iSCSI/FCP), block devices are aggregated into a logical entity called a Volume Group (VG). This is done using the Logical Volume Manager (LVM) and presents high performance I/O.
LUNs can be directly attached to VMs as disks, but some features, such as snapshots, are not supported when this option is used.
Local storage can be used to create a non-shared local data center, which allows only a single host.
|
DAS, SAS, iSCSI, NAS, FC, FCoE
XenServer data stores are called Storage Repositories (SRs), they support IDE, SATA, SCSI (physical HBA as well as SW initiator) and SAS drives locally connected, and iSCSI, NFS, SAS and Fibre Channel remotely connected.
Background: The SR and VDI abstractions allow advanced storage features such as Thin Provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not inherently support advanced operations directly, a software stack is provided based on Microsoft's Virtual Hard Disk (VHD)
specification which implements these features.
SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
Reference: XenServer 6.5.0 Administrator's Guide: http://bit.ly/1E2HvQ7
Also refer to the XenServer Hardware Compatibility List (HCL) for more details.
|
Yes (Discrete Device Assignment)
(No major change in WS 2019) Windows Server 2016 enables passing a graphics card through to a VM for performance; 60 FPS can easily be reached for graphical workloads. The feature is called Discrete Device Assignment. Thanks to this feature, CAD workloads can now be run in a VM.
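A hedged Discrete Device Assignment sketch; the PCI location path and VM name are placeholders, and the device is assumed to be disabled in Device Manager on the host first.
Set-VM -VMName "VM01" -AutomaticStopAction TurnOff                                       # DDA requires the VM stop action to be TurnOff
Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(0300)#PCI(0000)" -Force    # detach the GPU from the host
Add-VMAssignableDevice -VMName "VM01" -LocationPath "PCIROOT(0)#PCI(0300)#PCI(0000)"     # assign it to the VM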
|
|
|
|
Memory |
|
|
Dynamic / Over-Commit
Details
|
Yes (KSM)
Kernel SamePage Merging (KSM) reduces references to memory pages from multiple identical pages to a single page reference. This helps with optimization for memory density.
|
No
XenServer does not feature any transparent page sharing algorithm.
|
Yes - Dynamic Memory (Linux guest support)
(no major updates with WS 2019) Windows Server 2008 R2 SP1 introduced Dynamic Memory for Hyper-V. Dynamic Memory is a feature that lets Hyper-V balance memory automatically among running virtual machines. This feature adjusts the amount of memory available to the virtual machines in response to the needs of the virtual machine, based on values that you specify, resulting in higher virtual machine consolidation ratios.
New in WS 2012 R2 is the support for Linux guest - supported Linux operating systems with updated integration services take advantage of Dynamic Memory the same way as virtual machines running Windows Server.
In Server 2012 / R2, Dynamic Memory adds a minimum memory setting that lets Hyper-V reclaim unused memory from virtual machines (resulting in increased virtual machine consolidation numbers, especially in VDI environments where many VMs can be idle).
Server 2012 also introduced Hyper-V Smart Paging - if a virtual machine has a smaller amount of memory than its startup memory and it is restarted, Hyper-V needs additional memory to restart the machine. Due to host memory pressure or virtual machine states, Hyper-V may not always have additional memory available, potentially causing sporadic virtual machine restarts. Smart Paging uses disk resources as additional, temporary memory when more memory is required to restart a virtual machine.
Additionally with Server 2012 / R2, you can change the maximum memory amount and apply that change while the virtual machine is running.
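A minimal sketch with hypothetical values (Dynamic Memory itself is enabled while the VM is off; the maximum can later be raised at runtime):
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 8GB   # let Hyper-V balance this VM between 512MB and 8GB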
|
|
Memory Page Sharing
Details
|
Yes
RHEV has a feature called transparent huge pages, where the Linux kernel dynamically creates large memory pages (2MB versus 4KB) for virtual machines, improving performance for VMs that require them (newer OS generations tend to benefit less from larger pages).
|
No
There is no support for large memory pages in XenServer
|
No
(no major updates with WS 2019)
There is no page sharing technology in Hyper-V. Microsoft points out that page sharing becomes less efficient with larger memory pages as it is less likely to find common pages. Also newer OSs tend to reduce zero pages through other mechanisms (e.g. SuperFetch). That means that page sharing tends to be more efficient with legacy guest OSs, less efficient with newer ones.
|
|
|
Yes
RHEV supports Intel EPT and AMD-RVI
|
Yes
Yes, XenServer supports Intel EPT and AMD-RVI, see http://bit.ly/1FYAJKl
|
Yes
(no major updates with WS 2019)
Yes, Hyper-V supports 2MB pages for guests and hypervisor memory management (i.e. virtualized workloads can still benefit even if they are not large-page aware).
|
|
HW Memory Translation
Details
|
Yes
The Red Hat Enterprise Virtualization Manager interface allows you to import and export virtual machines (and templates) stored in the Open Virtualization Format (OVF).
This feature can be used in multiple ways:
- Moving virtual resources between Red Hat Enterprise Virtualization environments.
- Moving virtual machines and templates between data centers in a single Red Hat Enterprise Virtualization environment.
- Backing up virtual machines and templates.
|
Yes, incl. vApp
XenServer 6 introduced the ability to create multi-VM and boot sequenced virtual appliances (vApps) that integrate with Integrated Site Recovery and High Availability. vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard. There is full support for VM disk and OVF appliance imports directly from XenCenter with the ability to change VM parameters (virtual processor, virtual memory, virtual interfaces, and target storage repository) with the Import wizard. Full OVF import support for XenServer, XenConvert and VMware.
|
Yes (SLAT)
(no major updates with WS 2019)
Yes, Second Level Address Translation (SLAT) leverages AMD RVI and Intel EPT technology to reduce the overhead incurred during virtual to physical address mapping performed for virtual machines therefore reducing the complexity of the Windows hypervisor and the context switches needed to manage virtual machine page faults.
|
|
|
|
Interoperability |
|
|
|
Comprehensive
RHEV takes advantage of the native hardware certification of the Red Hat Enterprise Linux OS. The RHEV hypervisor (RHEV-H) is certified for use with all hardware that has passed Red Hat Enterprise Linux certification, except where noted in the Requirements chapter of the installation guide http://red.ht/1hOZEDA
|
Improving
XenServer has an improving HCL featuring the major vendors and technologies but compared to e.g. VMware and Microsoft the list is somewhat limited - so check support first. Links to XenServer HCL and XenServer hardware verification test kits are here: http://hcl.xensource.com/
|
Yes (OVF Import/Export)
A new extension to VMM is the OVF Import/Export tool that consists of cmdlets that enable VMM 2012 users to import and export virtual machines that are packaged in the OVF format. You can use the OVF Import/Export tool to import a virtual machine from other virtualization platforms (currently VMware vCenter and Citrix XenServer) or to export a virtual machine for use on another platform.
The OVF format uses an XML file with the extension .ovf together with one or more virtual disks. The OVF Import/Export tool does not convert virtual hard disk file formats. You may need third-party tools to convert a virtual hard disk format.
|
|
|
Limited
NEW
RHEV supports the most common server and desktop OSs as well as PPC guests, current support includes:
For X86_64 hosts:
- Microsoft Windows 10, Tier 1, 32\64 bit
- Microsoft Windows 7, Tier 1, 32\64 bit
- Microsoft Windows 8, Tier 1, 32\64 bit
- Microsoft Windows 8.1, Tier 1, 32\64 bit
- Microsoft Windows Server 2008, Tier 1, 32\64 bit
- Microsoft Windows Server 2008 R2, Tier 1, 64 bit
- Microsoft Windows Server 2012, Tier 1, 64 bit
- Microsoft Windows Server 2012 R2, Tier 1, 64 bit
- Red Hat Enterprise Linux 3, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 4, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 5, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 6, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 7, Tier 1, 32\64 bit
- SUSE Linux Enterprise Server 10, Tier 2, 32\64 bit
- SUSE Linux Enterprise Server 11, Tier 2, 32\64 bit
- SUSE Linux Enterprise Server 12, Tier 2, 32\64 bit
For PPC hosts:
- Red Hat Enterprise Linux 7, Tier 1, LE\BE
- Red Hat Enterprise Linux 6, Tier 1, BE
- SUSE Linux Enterprise Server 12, Tier 2, LE
- SUSE Linux Enterprise Server 11 SP4, Tier 2, BE
|
Good
XenServer 6.5 Service Pack 1 support for:
- Microsoft Windows 10
- Microsoft Windows Server 2012
- Ubuntu 14.04
- SUSE Linux Enterprise Server 11 SP3, SLED 11SP3, SLES 12
- Scientific Linux 5.11, 6.6, 7.0, 7.1
- Red Hat Enterprise Linux (RHEL) 5.10, 5.11, 6.5, 6.6, 7.0, 7.1
- Oracle Enterprise Linux (OEL) 5.10, 5.11, 6.5, 6.6, 7.0, 7.1
- Oracle UEK 6.5
- CentOS 5.10, 5.11, 6.5, 6.6, 7.0, 7.1
- Debian 7.2 'Wheezy'
- VSS support for Windows Server 2008R2 has been improved and reintroduced
Refer to the XenServer 6.5 Service Pack 1 Virtual Machine User Guide for details: http://bit.ly/1JdLV4R
|
Strong Windows Ecosystem
As it is built on the Windows Server 2019 operating system, Hyper-V can utilize the existing (vast) Windows operating system ecosystem and qualification structures. Microsoft focuses on the extensibility of Server 2019 and Hyper-V through this existing ecosystem, such as System Center or Microsoft Azure.
|
|
|
|
Yes (SDK, API, PowerShell)
The Software Development Kit provides the architectural overview of the APIs and use of SDK tools provided: http://bit.ly/13eawJ2
The XenServer 6.2 management API is documented in detail here: http://bit.ly/13eaocy
|
Closing the gap
Guest support for Hyper-V (including the free Hyper-V Server) 2016 and WS 2016 has clearly improved, but there is still a gap compared to VMware's support for older or less mainstream operating systems.
For specific levels/version check (Software Requirements section) : https://technet.microsoft.com/en-us/windows-server-docs/compute/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows
|
|
Container Support
Details
|
REST API, Python CLI, Hooks, SDK
RHEV exposes several interfaces for interacting with the virtualization environment. These interfaces are in addition to the user interfaces provided by the
Red Hat Enterprise Virtualization Manager Administration, User, and Reports Portals. Some of the interfaces are supported only for read access or only when it has been explicitly requested by Red Hat Support.
Supported Interfaces (Read and Write Access):
- Representational State Transfer (REST) API: With the release of RHEV-3 Red Hat introduced a new Representational State Transfer (REST) API. The REST API is useful for developers and administrators who aim to integrate the functionality of a Red Hat Enterprise Virtualization environment with custom scripts or external applications that access the API via standard HTTP. The REST API exposed by the Red Hat Enterprise Virtualization Manager is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
- Python Software Development Kit (SDK): This SDK provides Python libraries for interacting with the REST API. The Python SDK provided by the rhevm-sdk-python package is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
- Java Software Development Kit (SDK): This SDK provides Java libraries for interacting with the REST API. The Java SDK provided by the rhevm-sdk-java package is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
- Linux Command Line Shell: The command line shell provided by the rhevm-cli package is a fully supported interface for interacting with the Red Hat Enterprise Virtualization Manager.
- VDSM Hooks: The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Enterprise Virtualization Hypervisor is not currently supported.
Additional Supported Interfaces (Read Access)
Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support:
- Red Hat Enterprise Virtualization Manager History Database
- Libvirt on Virtualization Hosts
Unsupported Interfaces
Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support:
- The vdsClient Command
- Red Hat Enterprise Virtualization Hypervisor Console
- Red Hat Enterprise Virtualization Manager Database
|
|
Yes (Windows containers, Hyper-V Containers, Kubernetes - NEW)
NEW
Hyper-V under Windows Server 2016 supports containers based on Docker. Two kinds of containers exist:
· Windows containers: these containers share the operating system. Only the application dependencies are inside the container (e.g. IP stack, .NET, the application itself and all changes made on the OS)
· Hyper-V containers: these containers are located on top of a virtualization layer to increase the security level for multi-tenant environments. The libraries and the binaries are located in the container to avoid interference between the containers
Since Windows Server 2019, Kubernetes containers can be hosted.
The Windows Server 2019 Standard license supports only two Hyper-V containers on the same host. For unlimited Hyper-V containers, you require Windows Server 2019 Datacenter.
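Assuming the Docker engine is installed on the Windows container host, the isolation mode is simply a flag on docker run (invoked here from PowerShell); the image tag below is illustrative.
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver   # Windows (process-isolated) container
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver    # Hyper-V isolated container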
|
|
|
REST API
RHEV provides its RESTful API for external integration into cloud platforms, for example ManageIQ's cloud management interface.
|
CloudStack APIs, support for AWS API
Citrix CloudPlatform uses a RESTful CloudStack API. In addition to supporting the CloudStack API, CloudPlatform supports the Amazon Web Services (AWS) API. Future cloud API standards from bodies such as the Distributed Management Task Force (DMTF) will be implemented as they become available.
Details on the CloudPlatform API here: http://bit.ly/13eaDo0
|
Yes (WMI API, PowerShell)
(no major updates with WS 2019) You can use PowerShell for scripting and interface with the published WMI providers.
In Windows Server 2016, Windows PowerShell 5.0 delivers over 5,000 cmdlets to enable you to manage server roles and automate management tasks.
For new features in PowerShell see: http://bit.ly/1ayOSQh
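A few representative lines (hypothetical VM name) showing the kind of day-to-day automation the Hyper-V module enables:
Get-VM | Where-Object State -eq 'Running' | Select-Object Name, CPUUsage, MemoryAssigned   # quick inventory of running VMs
Checkpoint-VM -Name "VM01" -SnapshotName "pre-patch"                                       # create a checkpoint before maintenance
Get-Command -Module Hyper-V | Measure-Object                                               # count the cmdlets exposed by the Hyper-V module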
|
|
|
RHCI: CloudForms, OpenStack, RHEV; Satellite, OpenShift (Fee-Based Add-Ons)
NEW
Comment: Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. It will not list evaluations (green, amber, red), but rather provide the information that will help you evaluate them for YOUR environment.
Overview:
IaaS (private and hybrid)
Red Hat offers Red Hat Cloud Infrastructure (RHCI) -a single-subscription offering that bundles and integrates the following products:
- RHEV - Datacenter virtualization hypervisor and management for traditional (ENTERPRISE) workloads
- Cloud-enabled Workloads: RHEL OpenStack - scalable, fault-tolerant platform for developing a managed private or public cloud for CLOUD-ENABLED workloads
- Red Hat CloudForms - Cloud MANAGEMENT and ORCHESTRATION across multiple hypervisors and public cloud providers
- Red Hat Satellite - A system management platform that provides lifecycle management for Red Hat
Enterprise Linux for both host and tenant operating systems within Red Hat Cloud Infrastructure.
This includes provisioning, configuration management, software management, and subscription
management. - http://red.ht/1oKMZsP
PaaS:
Red Hat also offers OpenShift (PaaS), available as an on-premise technology as well as an online (public cloud) offering by Red Hat. Details here - http://red.ht/1LRn7ol
There are a number of public and hybrid (on-premise or cloud) offerings that Red Hat positions as complementary like Red Hat Storage Server (scale-out storage servers both on-premise and in the Amazon Web Services public cloud). Details are here: http://red.ht/1ug7XTY
|
CloudPlatform
Note: Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. It will not list evaluations (green, amber, red), but rather provide the information that will help you evaluate them for YOUR environment.
After the acquisition of Cloud.com in July 2011, Citrix has centered its cloud capabilities around its CloudPlatform suite.
Citrix CloudPlatform (latest release 4.3.0.2) - powered by Apache CloudStack - is an open-source cloud computing platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudPlatform manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudPlatform to deploy, manage, and configure cloud computing environments.
|
Service Provider Foundation API, Azure Service Management API
(no major updates with WS 2019) In addition to PowerShell / WMI based automation and scripting capabilities, MS introduced specific cloud APIs for service providers with the Service Provider Foundation and an extended Service Management API with the Windows Azure Pack. The Virtual Machines service of the Windows Azure Pack builds on the Service Provider Foundation (SPF) API provided with System Center 2012 to enable self-service IaaS.
The Service Provider Foundation (SPF) API is an extensible OData REST API in System Center 2012 that enables hosters to integrate their System Center installation into their customer portal and is automatically integrated with a customer's on-premises installation of App Controller. A simple exchange of credentials enables enterprises to add the Service Provider cloud to App Controller for consumption alongside private and public cloud resources. SPF also has multi-tenancy built in, enabling operation at massive scale, controlling multiple scale-units built around Virtual Machine Manager.
The Windows Azure Pack provides a solution for enterprises looking to act as service providers and service providers that want to attract enterprise workloads.
It runs on top of Windows Server and System Center and aims to provide the capabilities of Windows Azure in your datacenter, enabling you to offer a self-service, multi-tenant cloud with Windows Azure-consistent experiences and services.
The Windows Azure Pack is essentially a collection of Windows Azure technologies that install in enterprise and service provider datacenters, integrating with their existing System Center and Windows Server environments.
The Management Portal of the Azure Pack replicates the Windows Azure Developer portal experience found in Windows Azure, along with a subset of the services available in Windows Azure. The capabilities available in the Management Portal can be accessed programmatically through the Service Management API, an OData/REST API. This accessibility enables you to completely replace the portal, for example, if a service provider has their own portal, which they want to integrate with Azure services.
Details here: http://bit.ly/YuRVQl
|
|
|
Extensions
|
|
|
|
|
|
|
Cloud |
|
|
|
VDI Included in RHEV; HTML5 support (Tech Preview)
There is one single SKU for RHEV that includes server and desktop virtualization.
Red Hat's Enterprise Virtualization includes an integrated connection broker as well as the ability to manage VDI users via external (LDAP-based) directory services. The same interface is used to manage both server and desktop images (unlike most other solutions, e.g. VMware View or Citrix XenDesktop).
Please note that VDI is an additional charge to the server product and cannot be purchased separately (i.e. without purchasing RHEV for servers).
Red Hat Enterprise Virtualization for Desktops (RHEV-D) consists of:
- Red Hat Hypervisor
- Red Hat Enterprise Virtualization Manager (RHEV-M) as centralized management console with management tools that administrators can use to create, monitor, and maintain their virtual desktops (same interface as for server management)
- SPICE (Simple Protocol for Independent Computing Environments) - remote rendering protocol. Initial support for the SPICE-HTML5 console client is offered as a technology preview. This feature allows users to connect to a SPICE console from their browser using the SPICE-HTML5 client.
- Integrated connection broker - a web-based portal from which end-users can log into their virtual desktops
Note: VDI related capabilities are NOT listed as Fee-Based Add-Ons (no purchase of additional VDI management software is required or licenses involved to enable the VDI management capability).
However, you will require relevant client access licensing to run virtual machines with Windows OSs, see http://bit.ly/1cBdgAm for details
|
Citrix Desktop Virtualization (XenDesktop & XenApp 7.6 - NEW; ViaB; associated products)
Citrix is perceived by many to have the most comprehensive portfolio for desktop virtualization, alongside the largest overall market share.
Citrix's success in this space is historically based on its Terminal Services-like capabilities (Hosted SHARED Desktops, i.e. XenApp aka Presentation Server), but Citrix has over time added VDI (Hosted Virtual Desktops), mobility management (XenMobile), networking (NetScaler), cloud for service providers hosting desktops/apps (CloudPlatform) and other comprehensive capabilities to its portfolio (separate fee-based offerings).
Citrix's FlexCast approach promotes the 'any type of virtual desktop to any device' philosophy and facilitates the use of different delivery technologies (e.g. VDI, application publishing or streaming, client hypervisor etc.) depending on the respective use case (so not a one-size-fits-all approach).
XenDesktop 7.x history:
- Citrix's announcement of Project Avalon in 2011 promised the integration of a unified desktop / application virtualization capability into its CloudPlatform product. This was then broken up into the Excalibur Project (unifying XenDesktop and XenApp in the XenDesktop 7.x product) and the Merlin Release, aiming to provide multi-tenant clouds to manage virtual desktops and applications.
- XenDesktop 7.1 added support for Windows Server 2012 R2 and Windows 8.1, and new Studio configuration of server-based graphical processing units (GPUs) considered an essential hardware component for supporting virtualized delivery of 3D professional graphics applications.
- In Jan 2014 Citrix announced that XenApp is back as product name, rather than using XenDesktop to refer to VDI as well as desktop/application publishing capabilities, also see http://gtnr.it/14KYg4b
- With XenDesktop 7.5 Citrix announced the capability to provision application and/or desktop workloads to public and/or private cloud infrastructures (Citrix CloudPlatform, Amazon and (later) Windows Azure). Wake-on-LAN capability has been added to Remote PC Access and AppDNA is now included in the product.
- XenDesktop 7.6 includes new features such as session prelaunch and session linger, support for unauthenticated (anonymous) users, and connection leasing, which makes recently used applications and desktops available even when the Site database is unavailable.
VDI in a Box: Citrix also has a VDI in a Box (ViaB) offering (originating in the Kaviza acquisition) - a simple-to-deploy, easy-to-manage-and-scale VDI solution targeting smaller deployments and limited use cases.
In reality ViaB scales to larger (thousands of users) environments but has (due to its simplified nature and product positioning) restricted use cases compared to the full XenDesktop (there is no direct migration path between ViaB and XenDesktop). ViaB cannot, for instance, provide Hosted Shared Desktops (VDI only), offers no advanced graphics capabilities (HDX 3D Pro), has limited HA for fully persistent desktops, and no inherent multi-site management capabilities.
Overview here: http://bit.ly/1fXeA38
Recommended read for VDI comparison (Ruben Spruijt's VDI Smackdown): http://www.pqr.com/downloadformulier?file=VDI_Smackdown.pdf
|
Cloud OS: System Center, Hyper-V, Azure
NEW
Comment: Due to the variation in individual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. It will not list evaluations (green, amber, red), but rather provide the information that will help you evaluate them for YOUR environment. Use the Analyse & Print Report function to perform a custom evaluation.
Comment End
Microsoft continues to promote its Cloud OS vision, which aims to provide customers with one consistent platform for infrastructure, applications, and data - spanning customer datacenters, hosting service provider datacenters, and the Microsoft public cloud.
In simplified terms, Microsoft wants to provide a common management layer with the components of System Center 2016 that manages not only your private cloud (hosted e.g. on WS 2016) but also public cloud instances (based on Azure) or any hybrid scenarios where you combine on-premise and off-premise cloud resources.
Private Cloud:
The core components of Microsoft's private cloud are provided by Windows Server (e.g. Hyper-V in Windows Server 2016) and System Center 2016 for the management of this environment. (For details see: Private Cloud)
In 2017, Microsoft will deliver Azure Stack, which is Microsoft Azure in your datacenter. It will not use System Center as a management point. Instead, Azure Stack will use SDC, SDS and SDN management components (e.g. Network Controller, Storage Spaces Direct and Hyper-V). Azure Stack will be sold in a CPS format.
Public Cloud:
Microsoft Azure is Microsoft's public cloud computing platform (a separate offering - not included in Server 2016 or System Center!). It primarily started out as a PaaS environment used to build applications with different programming languages, tools and frameworks through a network of Microsoft-managed datacenters. Today Windows Azure provides both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) services plus a range of associated services. (For details see: Public Cloud)
|
|
|
|
Desktop Virtualization |
|
|
|
Yes
This is possible via the local data center feature, but it is limited to a single host with reduced management features.
|
No
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
Remote Desktop Services (included in Server 2019)
vGPU (Remote FX)
Passthrough GPU (Device Discrete Assignment)
(no major updates with WS 2019) Microsoft introduced VDI capability (in addition to the existing Terminal Services capabilities) as part of the Remote Desktop Services (RDS) role of the Server 2008 R2 operating system.
RDS provides hosted sessions (aka terminal services) as well as virtual desktop functionality (VDI).
With RDS in Windows Server 2012 / R2 , Microsoft has enhanced VDI to provide a more capable single infrastructure to enable the above mentioned virtual and session-based desktops as well as RemoteApp programs - building on the features that were available with 2008 R2.
Note: RDS related capabilities are NOT listed as Fee-Based Add-Ons (no purchase of additional VDI management software is required or licenses involved to enable the RDS management capability).
However, you will require relevant client access licensing to run either sessions or virtual machines (e.g. VDAs, RDS CALs), see http://bit.ly/1cBdgAm for details
There have been a large number of RDS-related enhancements in WS 2012 R2:
User and admin experience:
- Session Shadowing (shadow a session-based or virtual machine-based desktop or RemoteApp program e.g. for helpdesk and troubleshooting)
- Single server RDS deployment including Active Directory (RD Connection Broker role service on the same physical instance as an Active Directory Domain Controller)
- Improved RemoteApp behavior - transparency, live thumbnails, and seamless application move that allows the application content to remain visible while the application is moved on screen
- Quick reconnect for remote desktop clients (reconnect in less than 10 seconds)
- Dynamic display handling - display changes on the client to be automatically reflected on the remote client (seamless device rotation, monitor addition and removal for both remote sessions and RemoteApp programs).
- RemoteFX virtualized GPU supports DX11.1
Reducing Network and Storage Requirements:
- Improved compression and bandwidth usage - using codecs that enable better compression & deliver bandwidth savings (e.g. video content delivery over a WAN utilizes up to 50% less bandwidth compared to WS 2012)
- RemoteFX Codec improvements further reduce bandwidth for non-video content
- Ability to offload all progressive decode processing to AVC/H.264 hardware if available, enabling client implementations on lower-end CPU devices, with better user experience and lower bandwidth than in RDP 8.
- RemoteFX Media Streaming has up to 50% reduced bandwidth compared to WS 2012.
- Online Storage Deduplication - deduplication now works with online vhds (running desktops) on SMB3 storage (for VDI only)
- The new Tiering feature in Storage Spaces allows an SSD tier for frequently accessed data, and an HDD tier for less-frequently accessed data. Storage Spaces transparently moves data at a sub-file level between the two tiers based on how frequently data is accessed. Tiering is frequently used in virtual desktop environments where different data tiers require different storage performance.
- Write-back cache: Storage Spaces in WS 2012 R2 supports creating a write-back cache that uses a small amount of space on existing SSDs in the pool to buffer small random writes (predominant in VDI environments)
In Windows Server 2016, Hyper-V is able to provide a GPU directly to a VM (passthrough). This enables an application to run at 60 FPS inside a VM. This feature is called Discrete Device Assignment.
A VM now supports 4K resolution, a VDI VM can be generation 2, and there is investment in the H.264/AVC codec plus OpenGL & OpenCL API support.
|
|
|
|
1 |
|
|
|
3rd Party
At this time RHEV focuses on the management of the virtual and cloud infrastructure.
Partner solutions offer insight into the services running on top of the virtual infrastructure, with integration into RHEV.
|
Vendor Add-On: XenDesktop
EdgeSight, Director
The performance monitoring and troubleshooting aspect of Application Management in the context of Citrix is mostly applicable to XenDesktop 7 Director and XenDesktop 7 EdgeSight.
- Desktop Director is a real-time web tool, used primarily by Help Desk agents. In XenDesktop 7, Director's new troubleshooting dashboard provides real-time health monitoring of your XenDesktop 7 site.
- In XenDesktop 7, EdgeSight provides two key features, performance management and network analysis.
With XenDesktop 7, Director (the real-time assessment and troubleshooting tool) is included in all XenDesktop 7 editions.
The new EdgeSight features are included in both XenApp and XenDesktop Platinum edition entitlements; however, these features are based on the XenDesktop 7 platform. The environment must be XenDesktop 7 in order to leverage the new Director and EdgeSight features.
EdgeSight network analysis also requires NetScaler Enterprise or Platinum edition. With NetScaler Enterprise, real-time data for the last 60 minutes is provided; NetScaler Platinum edition has unlimited data retention. To summarize:
How do you get it?
With XenDesktop 7, Director is included in all XenDesktop 7 editions. The new EdgeSight features are included in both XenApp and XenDesktop Platinum edition entitlements; however, these features are based on the XenDesktop 7 platform.
- All editions: Director - real-time monitoring and basic troubleshooting (up to 7 days of data)
- XD7 Platinum: EdgeSight performance management feature - includes #1 + historical monitoring (up to a full year of data through the monitoring SQL database)
- XD7 Platinum + NetScaler Enterprise: EdgeSight performance management and network analysis - includes #2 plus 60 mins. of network data
- XD7 Platinum + NetScaler Platinum: EdgeSight performance management and network analysis - includes #2 plus unlimited network data
http://bit.ly/17toPr8
Citrix EdgeSight is a performance and availability management suite for XenApp, Presentation Server, XenDesktop and endpoint systems (through agents running on physical systems or virtualized platforms). Citrix EdgeSight monitors applications, sessions, devices, and the network in real time. Details here: http://support.citrix.com/article/CTX124092
EdgeSight for NetScaler is an agent-less solution which provides real-time user performance monitoring specifically for web applications based upon actual user experience (response time). It provides both real-time and historical views to proactively identify potential problems. (Citrix NetScaler is an application switch - a physical or virtual appliance - that intelligently distributes, optimizes, and secures network traffic for Web applications. Features include load balancing, compression, Secure Sockets Layer (SSL) offload, a built-in application firewall, and dynamic content caching.) Details here: http://support.citrix.com/article/CTX121310
|
Improved (Storage Spaces); Storage Spaces Direct
(no major updates with WS 2019) Windows Server 2012 introduced the concept of Storage Spaces (supported for Hyper-V and free Hyper-V Server environments). In simplified terms, Storage Spaces allows you to use inexpensive SAS/SATA drives (JBODs) to provide more advanced pooled and redundant storage through a storage virtualization layer. First you create a storage pool specifying the physical disk drives to be used, and subsequently you create virtual disks (not to be confused with VHDs for virtual machines) within the pool. These disks then appear to the host like new physical disks with changed attributes (resilience, capacity etc.) by adding mirroring or parity to the disk, and pools can be dynamically expanded. They can then be used for local storage, failover clustering and SMB Direct (RDMA).
WS 2012 R2 introduced significant enhancements to Storage Spaces:
1. Tiered Storage
Ability to provide tiered storage (that automatically moves frequently accessed data to faster (SSD) storage and infrequently accessed data to slower (HDD) storage); see the PowerShell sketch at the end of this section.
Microsoft developed Storage Spaces Direct in Windows Server 2016, which enables the use of local DAS storage devices to create a highly available storage system. It works like Storage Spaces but uses local DAS storage devices, enabling hyperconverged solutions based on Microsoft technologies. For more information, you can read the following topic: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview
- To create a storage space with storage tiers, the storage pool must have a sufficient number of hard disks and SSDs to support the selected storage layout, and the disks must contain enough free space.
- When creating a storage space using the New Virtual Disk Wizard or the New-VirtualDisk cmdlet you can now specify to create the virtual disk with storage tiers.
- The virtual disk must use fixed provisioning, and the number of columns will be identical on both tiers (a four-column, two-way mirror with storage tiers would require eight SSDs and eight HDDs).
- Volumes created on virtual disks that use storage tiers should be the same size as the virtual disk.
- Administrators can pin (assign) files to the standard (HDD) or faster (SSD) tier by using the Set-FileStorageTier cmdlet, ensuring that the files are always accessed from the appropriate tier.
2. Storage Spaces Write-Back Caching:
Storage Spaces can use existing SSDs in the storage pool to create a write-back cache that buffers small random writes to SSDs before later writing them to HDDs.
- write-back cache is transparent to administrators and users and is created on all new virtual disks as long as there is a sufficient number of SSDs in the storage pool (Simple spaces: one SSD, Two-way mirror spaces and single-parity spaces: two SSDs, three-way mirror spaces and dual parity spaces: three SSDs)
- write-back cache works with all types of storage spaces, including storage spaces with storage tiers
- newly created storage spaces automatically use a 1 GB write-back cache (as long as the storage pool contains enough disks with MediaType set to SSD or Usage set to Journal)
3. Parity space support for failover clusters, dual parity and automatic rebuild from storage pool free space (rather than hot spares)
- You can now use parity spaces to maximize capacity and resiliency while still offering the ability to fail over to another cluster node
- Dual parity enables you to keep a high level of resiliency when using a parity space with a large number of disks or any time when you need to help protect against two simultaneous disk failures
- When a disk fails, instead of writing a copy of the data that was on the failed disk to a single hot spare, the data is copied to multiple drives in the pool such that the previous level of resiliency is achieved
Please note: Storage Spaces can be provided from a single file server node using (non-RAIDed!) SAS or SATA JBODs. For clustered nodes you will need SAS JBODs (not SATA) in a SHARED JBOD enclosure (Storage Spaces does NOT present virtualized local storage as shared storage like e.g. VMware VSA). MS will list certified JBOD enclosures.
Advantages:
- Provide reliable and scalable storage with reduced cost (cheap JBODs)
- Aggregate individual drives into storage pools
- Expand storage pools on demand
- Deploy specific drives as hot spares
Limitations:
- Fibre-channel and iSCSI are not supported
- SAS or SATA JBODs only, no HW Raid, for clustering SAS only and a shared SAS (no SATA) JBOD enclosure
- A mirrored pool must have at least two drives; a three-drive minimum is required for using parity
- Virtual disks to be used with a failover cluster must use the NTFS file system (not ReFS), can't use thin provisioning, and must use the mirrored (not parity) option.
- Performance considerations (software RAID, JBODs of different types, multiple virtual disks per pool etc.)
Storage Spaces FAQ here: http://bit.ly/UxZ1FP
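As referenced above, a hedged sketch of creating a mirrored, tiered space with the cmdlets named in this section; the pool, tier and disk names plus the sizes are hypothetical.
$disks = Get-PhysicalDisk -CanPool $true                                                   # all poolable JBOD disks
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror -ProvisioningType Fixed   # fixed provisioning is required for tiers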
|
|
|
|
2 |
|
|
Application Management
Details
|
sVirt & Security Partnerships
RHEV includes sVirt, a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtual machines. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed at the host or at another virtual machine. To learn more about sVirt, visit this link: http://red.ht/1oZPMkf. Also, Red Hat's RHEV partner ecosystem has security partnerships with SourceFire, Catbird, and other security-focused products and solutions.
|
Vendor Add-Ons: NetScaler Gateway, App Firewall, CloudBridge
NetScaler provides various (network) security related capabilities through e.g.
- NetScaler Gateway: secure application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile)
- NetScaler AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the NetScaler application delivery solution and is included with Citrix NetScaler, Platinum Edition.
Details here: http://bit.ly/17ttmKk
CloudBridge:
Initially marketed under NetScaler CloudBridge, Citrix CloudBridge provides a unified platform that connects and accelerates applications, and optimizes bandwidth utilization across public cloud and private networks.
CloudBridge encrypts the connection between the enterprise premises and the cloud provider so that all data in transit is secure.
http://bit.ly/17ttSYA
|
Yes (SC Operations Manager 2019, Azure Monitor);
SC 2012 R2 Operations Manager enables you to monitor services, devices, and operations from a single console. Azure Monitor can also monitor your infrastructure from the Microsoft cloud.
Numerous views show the state, health, and performance information, as well as alerts generated for availability, performance, configuration and security situations.
Specifically for virtualization and cloud aspects you can connect System Center 2012 Virtual Machine Manager (VMM) with Operations Manager to monitor the health and availability of the virtual machines and virtual machine hosts that VMM manages. You can also monitor health and availability of the VMM management server, the VMM database server, library servers, and VMM Self-Service Portal web servers, and see diagram views of the virtualized environment through the Operations console in Operations Manager.
A close integration between System Center 2012 R2 Virtual Machine Manager and System Center 2012 R2 Operations Manager introduces System Center cloud monitoring of virtual layers for private cloud environments. To get this new functionality, use the System Center 2012 Management Pack for System Center 2012 R2 Virtual Machine Manager Dashboard, which is imported automatically when you integrate Operations Manager and Virtual Machine Manager.
With System Center 2012 R2, management packs are also updated with new metrics for chargeback purposes that are based both on allocation and utilization. This provides better integration with chargeback and reporting, and enables monitoring of tenant-based utilization of resources that allows chargeback and billing.
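As a hedged sketch only: the Operations Manager connection described above can also be created from the VMM PowerShell module rather than the console. The server names below are hypothetical, and the parameters should be verified against your VMM version.
# Connect to the VMM management server (hypothetical host name)
Get-SCVMMServer -ComputerName "vmm01.contoso.com"
# Create the Operations Manager connection used for health/fabric monitoring and PRO
New-SCOpsMgrConnection -OpsMgrServer "scom01.contoso.com" -UseVMMServerServiceAccount -EnablePRO $true -EnableMaintenanceModeIntegration $true
# Confirm the connection status
Get-SCOpsMgrConnection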
|
|
|
|
3 |
|
|
|
Vendor Add-On: CloudForms
Red Hat CloudForms (http://red.ht/1ldORtw), available as a separate product or sold as part of Red Hat Cloud Infrastructure (RHCI), provides enterprises with operational management tools including monitoring, chargeback, governance, and orchestration across virtual and cloud infrastructure such as Red Hat Enterprise Virtualization, Amazon Web Services, Microsoft, VMware, and OpenStack.
The CloudForms Management Engine enables context aware, model-driven automation and orchestration of administrative, operational, and self-service activities in enterprise cloud environments. Automation can be driven across a wide spectrum of scenarios including discovery, state changes, performance and trending, event-based, scheduled, via web-services integration or on-demand through an extensible web-based management portal.
|
Workflow Studio (incl.)
Workflow Studio is included with this license and provides a graphical interface for workflow composition in order to reduce scripting. Workflow Studio allows administrators to tie technology components together via workflows. Workflow Studio is built on top of Windows® PowerShell and Windows Workflow Foundation. It natively supports Citrix products including XenApp, XenDesktop, XenServer and NetScaler.
Previously available as a component of the XenDesktop suite, Workflow Studio was retired in XenDesktop 7.x.
|
Azure Monitor, Azure Sentinel, Azure Security Center
(No major change in WS 2019) Microsoft's security infrastructure plus the wider Windows ecosystem (3rd-party solutions)
There have been a number of security-related updates in Windows Server and System Center with the 2012 R2 release (see the security section in the management category).
Microsoft Forefront is a suite of security related products that deliver end-to-end security and access to information through an integrated line of protection, access, and identity management products.
Main products are:
- Forefront Endpoint Protection 2010
- Forefront Protection Suite
- Forefront Identity Manager 2010
- Forefront Protection 2010 for Exchange Server
- Forefront Protection 2010 for SharePoint
- Forefront Threat Management Gateway 2010
- Forefront Unified Access Gateway 2010
For features and details see: http://bit.ly/1aZL5s4
|
|
|
|
4 |
|
|
Workflow / Orchestration
Details
|
No - See Details
There is no natively provided site failover capability in RHEV, but Red Hat does provide the tools needed to build a disaster recovery solution.
This is possible via 3rd-party partner integrations (such as Veritas, Acronis, SEP, Commvault).
|
Integrated Site Recovery (incl.)
XenServer 6 introduced Integrated Site Recovery (maintained in 6.5), which utilizes the native remote data replication between physical storage arrays and automates the recovery and failback capabilities. The new approach removes the Windows VM requirement for the StorageLink Gateway components and now works with any iSCSI or hardware HBA storage repository (rather than only the restricted storage options with StorageLink support). You can perform failover, failback and test failover. You can configure and operate it using the Disaster Recovery option in XenCenter. Please note however that Site Recovery does NOT interact with the storage array, so you will have to e.g. manually break mirror relationships before failing over. You will need to ensure that the virtual disks as well as the pool metadata (containing all the configuration data required to recreate your VMs and vApps) are correctly replicated to your secondary site.
Note: Citrix strongly recommends using the XenServer 6.x Disaster Recovery feature, as the legacy Metadata Backup, Restore and Update mechanism (accessible via the XenServer host console) will be deprecated in a future XenServer release. Citrix advises customers using the legacy mechanism to migrate to the new, integrated feature. [CA-65906]
|
Yes (SC Orchestrator / SC Service Mgr 2019, Azure Automation), Included
(No major change in SC 2019) System Center 2012 / R2 includes the Orchestrator product (Opalis acquisition). Orchestrator provides a workflow management solution that lets you automate the creation, monitoring, and deployment of resources in your environment.
Essentially Orchestrator provides the glue between the various System Center and other infrastructure components and allows automated interaction between the components through workflow automation (runbooks). This enables you to automate any process in your environment through a drag-and-drop interface by linking activities into runbooks.
Orchestrator includes many built-in standard activities, and you can expand Orchestrator's functionality and ability to integrate with other Microsoft and third-party products by installing integration packs.
Integration packs for Orchestrator contain additional activities that extend the functionality of Orchestrator. Details here: http://bit.ly/Un9Quf
Updated in Orchestrator in SC 2012 R2 are the Windows Azure Integration Pack for Orchestrator (in System Center 2012 SP1 and System Center 2012 R2) and the System Center Integration Pack for System Center 2012 Virtual Machine Manager.
Orchestrator also provides extensible integration to any system through the Orchestrator Integration Toolkit. You can create custom integrations that allow Orchestrator to connect to any environment.
System Center 2012 / R2 Service Manager provides an integrated platform for automating your organization's IT service management best practices, such as those found in the Microsoft Operations Framework (MOF) and the Information Technology Infrastructure Library (ITIL). It provides built-in processes for incident and problem resolution, change control, and asset lifecycle management.
You will often find a direct integration between Service Manager and Orchestrator, where a service request is handled and approved by a Service Manager workflow and subsequently triggers an Orchestrator runbook. For example, a user could request a virtual machine or service (service template) from the self-service portal in Service Manager, which is integrated with an Orchestrator runbook that triggers the virtual machine deployment through Virtual Machine Manager.
With Windows Server 2012, Microsoft introduced PowerShell 3, which provides a wide range of cmdlets to manage almost all possible configuration and management tasks in the operating system. Essentially all Windows roles and features can be managed using PowerShell, as the sketch below illustrates.
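For illustration only, a minimal PowerShell sketch of this kind of end-to-end task - installing the Hyper-V role and deploying a VM - is shown below; the VM name, VHD path and switch name are hypothetical.
# Install the Hyper-V role and management tools (reboots the host)
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
# Create and start a new generation 2 VM (hypothetical name, VHD path and virtual switch)
New-VM -Name "Web01" -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath "D:\VHDs\Web01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External-vSwitch"
Start-VM -Name "Web01"
# Query basic runtime state
Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned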
|
|
|
|
5 |
|
|
|
Vendor Add-On: CloudForms
NEW
RHEV includes enterprise reporting capabilities through a comprehensive management history database, which any reporting application can utilize to generate a range of reports at data center, cluster and host levels.
For charge-back, RHEV has 3rd-party integrated solutions such as IBM's Tivoli Usage and Accounting Manager (TUAM), which can convert the metrics RHEV's enterprise reports provide into fiscal chargeback numbers.
Red Hat CloudForms (a fee-based add-on, also sold as part of Red Hat Cloud Infrastructure, RHCI) provides enterprises with operational management tools including monitoring, chargeback, governance, and orchestration across virtual and cloud infrastructure such as Red Hat Enterprise Virtualization, Amazon Web Services, Microsoft, VMware, and OpenStack. It also provides the capability for cost allocation with usage and chargeback, determining who is using which resources so you can allocate costs and create and implement chargeback models.
|
Vendor Add-On: CloudPortal Business Manager, CloudStack Usage Server (Fee-Based Add-Ons)
Citrix's CloudPortal product line offers CloudPortal Business Manager - a business operations suite that works in conjunction with IaaS clouds running on Citrix CloudPlatform, allowing you to meter and automate billing and payment processing for services consumed within your cloud.
http://bit.ly/1w9309R
- Citrix's CloudStack contains a Usage Server, a separately installed component that provides aggregated usage records which can be used to create billing integration for CloudStack metrics.
- Also, the Workload Balancing engine introduced with XenServer 5.6 FP1 supports simple chargeback and reporting. The chargeback report includes, amongst other data, the name of the VM and its uptime, as well as usage for storage, CPU, memory and network reads/writes. You can use the Chargeback Utilization Analysis report to determine what percentage of a resource (such as a physical server) a specific department within your organization used.
|
Hyper-V Replica (included), Storage Replica, Azure Site Recovery or 3rd-party Fee-Based Add-Ons
Disaster recovery capability has been enhanced through the introduction of Hyper-V Replica - an asynchronous replication of virtual machines over a network link from one Hyper-V host at a primary site to another Hyper-V host at a replica site. In the event of failure at the primary site, administrators can manually fail over production virtual machines to the Hyper-V server at the recovery site.
During failover, virtual machines are brought back to a consistent point in time, and they can be accessed by the rest of the network. The minimum replication interval is 5 minutes (so there is the usual data loss associated with asynchronous replication).
With WS 2012R2 you can also configure extended replication where the Replica server forwards information about changes that occur on the primary virtual machines to a third server for additional protection. In addition the frequency of replication, which previously was a fixed value, is now configurable. You can also access recovery points for 24 hours. Previous versions had access to recovery points for only 15 hours.
Hyper-V Replica Pros:
- Affordable DR solution (included in Server 2012, no extra licenses required)
- Does not require expensive SAN based replication
- Test failover without impact on production VMs
Cons:
- Manual failover and recovery actions
- Aimed at smaller scale deployments
- No integration with SAN array replication (unlike e.g. vSphere SRM), no synchronous replication
You can of course use (independently of Hyper-V Replica) manual array-based replication mechanisms to replicate virtual machines on LUN level between sites. This would either be a fairly manual process or require a (fee-based) third-party solution with varying levels of integration and complexity.
In Windows Server 2016, you can leverage Storage Replica to replicate block-by-block from one volume to another. It works with SAN or Storage Spaces (Direct) solutions, so you can replicate a volume from one site to another as part of your recovery plan. The replication can be synchronous or asynchronous, depending on the latency of the link between the two sites.
Microsoft also provides Azure Site Recovery to replicate VMs from one site to Azure or to another site. This technology is based on Hyper-V Replica.
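To make the Hyper-V Replica workflow described above concrete, here is a minimal PowerShell sketch; the host names, VM name, storage path and Kerberos/HTTP on port 80 are all hypothetical assumptions.
# On the replica (target) host: accept inbound replication over Kerberos/HTTP
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
# On the primary host: enable replication for a VM with a 5-minute interval (frequency configurable in 2012 R2 and later)
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "hv-replica.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "SQL01"
# On the replica host: run a non-disruptive test failover
Start-VMFailover -VMName "SQL01" -AsTest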
|
|
|
|
6 |
|
|
|
Vendor Add-Ons: Load Balancer, High Performance
- Red Hat has networking-related products like the Load-Balancer Add-On for RHEL (http://www.redhat.com/f/pdf/rhel/RHEL6_Add-ons_datasheet.pdf) and the RHEL High-Performance Network Add-On (which delivers remote direct memory access - RDMA - over Converged Ethernet - RoCE) that can add value to virtualization and cloud environments.
- Common SDN solution integration is available via Neutron integration in RHEV 3.3 (tech preview)
- Cisco UCS (VM-FEX) integration
|
NetScaler Gateway, AppFirewall, Branch Repeater, 1000v, CloudBridge (Fee-Based Add-Ons)
Citrix has been greatly expanding its network-related capabilities. These typically sit under the umbrella of the NetScaler family, but frequent product name changes and repackaging often make it difficult to keep track.
- NetScaler for SDN: unifying L4-7 network services into an application control layer and integrating this application control with existing transport networks and emerging software defined network (SDN) technologies
- NetScaler Gateway (formerly Access Gateway): secure (VPN) application and data access for Citrix XenApp, Citrix XenDesktop and Citrix XenMobile
- NetScaler Branch Repeater: WAN optimization solution that accelerates, controls and optimizes desktops, applications, multimedia for branch and mobile users (Branch Repeater is merging with the Citrix CloudBridge product family)
- NetScaler AppFirewall: secures web applications, prevents inadvertent or intentional disclosure of confidential information and aids in compliance with information security regulations such as PCI-DSS. AppFirewall is available as a standalone security appliance or as a fully integrated module of the NetScaler application delivery solution and is included with Citrix NetScaler, Platinum Edition.
- NetScaler 1000V: NetScaler is engineered to seamlessly and uniquely integrate with the core Nexus 7000 and the virtual Nexus 1000V fabrics.
NetScaler is available as physical appliance and software-based virtual appliances in a range of editions:
- NetScaler MPX appliances are physical network appliances that offer up to 120 Gbps performance.
- NetScaler SDX is a high-density consolidation platform that combines Xen-based virtualization with the architecture of NetScaler MPX to run up to 40 NetScaler instances simultaneously.
- NetScaler VPX virtual appliances run as virtual machines (VMs)
Editions available include Standard, Enterprise and Platinum.
http://bit.ly/17twoy5
|
Hyper-V Resource Metering and Service Manager (included), Health Service
(No major updates in Windows Server 2016, except when Storage Spaces Direct is enabled. In this case you can leverage the Health Service to gather cluster information such as average CPU usage, IOPS, capacity and so on)
Hyper-V in Windows Server 2012 introduced Resource Metering, a feature that allows customers to create usage-based billing solutions.
MS provides two mechanisms to collect this data (without System Center): Hyper-V cmdlets in Windows PowerShell and the APIs in the Virtualization WMI provider. The exposed metrics are: average CPU usage, average physical memory usage, minimum memory usage, maximum memory usage, maximum amount of disk space allocated to a virtual machine, total incoming network traffic, and total outgoing network traffic.
Microsoft has extended the Resource Metering feature in Hyper-V 2012 R2 to include additional metering objects.
The actual chargeback function can either be provided by one of the many 3rd-party tools or now also by System Center 2016 Service Manager. Service Manager works in conjunction with Operations Manager and VMM. Price sheets, or rate cards, are created in Service Manager for Virtual Machine Manager clouds. The clouds are then assigned to price sheets. For example, you might have price sheets associated with silver and bronze clouds in order to differentiate levels of service.
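A minimal sketch of the PowerShell mechanism mentioned above (the VM name is hypothetical):
# Start collecting metering data for a VM
Enable-VMResourceMetering -VMName "Tenant-VM01"
# Report the accumulated CPU, memory, disk allocation and network usage
Measure-VM -VMName "Tenant-VM01" | Format-List
# Reset the counters, e.g. at the start of a new billing period
Reset-VMResourceMetering -VMName "Tenant-VM01"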
|
|
|
|
7 |
|
|
Network Extensions
Details
|
NSX (Vendor Add-on)
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
|
Hyper-V Extensible Switch - 3rd party Extensions (Nexus 1000V - Fee-Based Add On)
(No major updates in WS 2016) The Hyper-V Virtual Switch enables ISVs to create extensible plug-ins (Virtual Switch Extensions) that can provide enhanced networking and security capabilities.
Details on Cisco's Nexus 1000V for Hyper-V here: http://bit.ly/1aZTXOk
New in 2012 R2 is the ability to give third party extensions visibility into both Hyper-V Network Virtualization address spaces.
The HNV module was moved inside the virtual switch so that extensions can see both the provider address (PA) and customer address (CA) IP address spaces. This allows forwarding and other types of extensions to make decisions with knowledge of both address spaces.
The second change was to implement hybrid forwarding. Hybrid forwarding directs packets to different forwarding agents based on the packet type. In the Windows Server 2012 R2 implementation, a packet that is NVGRE-encapsulated is forwarded by the HNV module, while a non-NVGRE packet is forwarded as normal by the forwarding extension. Regardless of which agent performs the forwarding computation, the forwarding extension still has the opportunity to apply additional policies to the packet. If there is no forwarding extension, the Microsoft forwarding logic takes over for non-NVGRE packets.
Details here: http://bit.ly/1aZTz2p
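As an illustrative sketch, switch extensions can be listed and toggled with the Hyper-V PowerShell module; the switch name below is hypothetical, and the extension name must match whatever Get-VMSwitchExtension reports for your installed extension (e.g. a Nexus 1000V forwarding extension).
# List the extensions bound to a virtual switch and their state
Get-VMSwitchExtension -VMSwitchName "Tenant-vSwitch" | Select-Object Name, Vendor, ExtensionType, Enabled
# Enable a third-party forwarding extension by the name reported above (placeholder name)
Enable-VMSwitchExtension -VMSwitchName "Tenant-vSwitch" -Name "Contoso Forwarding Extension"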
|
|