|
|
|
Storage |
|
|
Supported Storage
Details
|
DAS, SAS, iSCSI, NAS, SMB, FC, FCoE, openFCoE
XenServer data stores are called Storage Repositories (SRs). They support IDE, SATA, SCSI (physical HBA as well as software initiator) and SAS drives connected locally, and iSCSI, NFS, SMB, SAS and Fibre Channel connected remotely.
Background: The SR and VDI abstractions allow advanced storage features such as Thin Provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not inherently support advanced operations directly, a software stack based on Microsoft's Virtual Hard Disk (VHD) specification is provided which implements these features.
SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
Reference: XenServer 7.5 Administrator's Guide: http://bit.ly/2rHyEE6
Also refer to the XenServer Hardware Compatibility List (HCL) for more details.
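To illustrate the SR/VDI model, the objects can be inspected and created directly with the xe CLI. This is a minimal sketch; all UUIDs and names are placeholders:
    # List all Storage Repositories known to the pool
    xe sr-list
    # List the virtual disk images (VDIs) contained in a given SR
    xe vdi-list sr-uuid=<sr-uuid>
    # Create a new 10 GiB VDI on that SR
    xe vdi-create sr-uuid=<sr-uuid> name-label="data-disk" type=user virtual-size=10GiB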
|
|
Yes
Support for NAS (Network Attached Storage), FC (Fibre Channel), iSCSI and FCoE. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes (limited for SAS)
Dynamic multipathing support is available for Fibre Channel and iSCSI storage arrays (round robin is the default balancing mode). XenServer also supports the LSI Multi-Path Proxy Driver (MPP) for the Redundant Disk Array Controller (RDAC); by default this driver is disabled. Multipathing to SAS-based SANs is not enabled by default; changes must typically be made to XenServer because SAS drives do not readily appear as emulated SCSI LUNs.
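A hedged sketch of enabling multipathing on a host with the xe CLI (UUIDs are placeholders; the host should be in maintenance mode with its SRs unplugged before the setting is changed):
    # Take the host out of service, switch multipathing on, then re-enable it
    xe host-disable uuid=<host-uuid>
    xe host-param-set uuid=<host-uuid> other-config:multipathing=true
    xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp
    xe host-enable uuid=<host-uuid>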
|
|
Yes
Multipath included. In addition, Oracle offers the Storage Connect plug-in to simplify storage management via Oracle VM Manager or EM12c Cloud Control. More info on: http://docs.oracle.com/cd/E50245_01/index.html and http://www.oracle.com/us/technologies/virtualization/ovm3-storage-connect-459309.pdf
|
|
Shared File System
Details
|
Yes (SR)
XenServer uses the concept of Storage Repositories (disk containers/data stores). These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. All hosts in a single resource pool must have at least one shared SR in common. NAS, iSCSI (Software initiator and HBA are both supported), SAS, or FC are supported for shared storage.
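As an example of a shared SR, a pool-wide NFS repository can be created from the xe CLI roughly as follows (server and export path are placeholders):
    # Create an NFS SR that is shared across all hosts in the pool
    xe sr-create content-type=user type=nfs shared=true name-label="Shared NFS SR" \
      device-config:server=<nfs-server> device-config:serverpath=</export/xenserver>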
|
|
Yes
OCFS2 included. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes (iSCSI, FC, FCoE)
XenServer 7.x adds Software-boot-from-iSCSI for Cisco UCS
Yes for XenServer 6.1 and later (XenServer 5.6 SP1 added support for boot from SAN with multi-pathing support for Fibre Channel and iSCSI HBAs)
Note: Rolling Pool Upgrade should not be used with Boot from SAN environments. For more information on upgrading boot from SAN environments see Appendix B of the XenServer 7.5 Installation Guide: http://bit.ly/2sK4wGn
|
|
Yes
Yes for SAN Boot with FC. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
No
While several (unofficial) approaches are documented, flash drives are not officially supported as boot media for XenServer 7.x.
|
|
Yes
Technically possible but not officially supported by Oracle. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
Virtual Disk Format
Details
|
vhd, raw disk (LUN)
XenServer supports file-based VHD (NAS, local), block device-based VHD (FC, SAS, iSCSI) using a Logical Volume Manager on the Storage Repository, or a full LUN (raw disk).
|
|
Raw Images (*.img files)
Disks created from OVM Manager are created as Raw images by default; imported disks are converted from their original format to Raw images. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
2TB (16TB with GFS2)
NEW
For XenServer 7.5 the maximum virtual disk sizes are:
- NFS: 2TB minus 4GB
- LVM (block): 2TB minus 4GB
Reference: http://bit.ly/2rcrjZx
Experimental support for GFS2 Storage Repositories in XenServer 7.5 effectively increased the maximum virtual disk size from 2TB to 16TB.
|
|
10 TB (Virtual Disk on OCFS2), Maximum supported on the Guest OS (Raw Disks)
For NFS there is no limit specified, since it depends on the backend local filesystem of the NFS server. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
Thin Disk Provisioning
Details
|
Yes (Limitations on block)
XenServer supports 3 different types of storage repositories (SRs):
1) File-based VHD on a local ext3 or remote NFS filesystem, which supports thin provisioning for VHD (see the sketch below)
2) Block device-based VHD format (SAN based on FC, iSCSI, SAS), which has no support for thin provisioning of the virtual disk but supports thin provisioning for snapshots
3) LUN-based raw format, where a full LUN is mapped as a virtual disk image (VDI), so thin provisioning is only available if the storage array hardware supports it
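A rough way to observe thin provisioning on a file-based (NFS/ext) SR is to compare a VDI's virtual size with its actual allocation; UUIDs are placeholders:
    # Create a 50 GiB virtual disk on an NFS-backed SR
    xe vdi-create sr-uuid=<nfs-sr-uuid> name-label="thin-disk" type=user virtual-size=50GiB
    # On a file-based SR the backing VHD starts nearly empty
    xe vdi-param-get uuid=<vdi-uuid> param-name=virtual-size
    xe vdi-param-get uuid=<vdi-uuid> param-name=physical-utilisation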
|
|
Yes
Oracle provides the ability to create Sparse Disks and use Thin Cloning for efficient disk usage based on actual usage, in addition to Non-Sparse Disks for full allocation. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
No
There is no NPIV support for XenServer
|
|
No
|
|
|
Yes - Clone on boot, clone, PVS, MCS
XenServer 6.2 introduced Clone on Boot
This feature supports Machine Creation Services (MCS) which is shipped as part of XenDesktop. Clone on boot allows rapid deployment of hundreds of transient desktop images from a single source, with the images being automatically destroyed and their disk space freed on exit.
General cloning capabilities: When cloning VMs based off a single VHD template, each child VM forms a chain where new changes are written to the new VM and old blocks are read directly from the parent template. When this is done with a file-based VHD (NFS), the clone is thin provisioned (see the sketch below). Chains up to a depth of 30 are supported, but be aware of the performance implications.
Comment: Citrix's desktop virtualization solution (XenDesktop) provides two additional technologies that use image sharing approaches:
- Provisioning Services (PVS) provides a (network) streaming technology that allows images to be provisioned from a single shared-disk image. Details: http://bit.ly/2d0FrQp
- With Machine Creation Services (MCS) all desktops in a catalog will be created off a Master Image. When creating the catalog you select the Master and choose if you want to deploy a pooled or dedicated desktop catalog from this image.
Note that neither PVS (for virtual machines) nor MCS is included in the base XenServer license.
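A minimal sketch of the cloning operations from the xe CLI (VM and SR UUIDs are placeholders; vm-clone is the fast copy-on-write variant, while vm-copy produces a full copy):
    # Fast-clone a halted VM; on file-based VHD SRs the clone is thin provisioned
    xe vm-clone uuid=<vm-uuid> new-name-label="clone-01"
    # Create a full, independent copy, optionally on a different SR
    xe vm-copy uuid=<vm-uuid> new-name-label="full-copy-01" sr-uuid=<target-sr-uuid>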
|
|
Yes
This technology is called Thin Cloning and is available on OCFS2. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
SW Storage Replication
Details
|
No
There is no integrated (software-based) storage replication capability available within XenServer
|
|
No
|
|
|
IntelliCache
XenServer 6.5 introduced a read caching feature that uses host memory in the new 64-bit Dom0 to reduce IOPS on storage networks and improve LoginVSI scores, with VMs booting up to 3x faster. The read cache feature is available to XenDesktop & XenApp Platinum users who have an entitlement to it.
With XenServer 7.0, LoginVSI scores of 585 have been attained (Knowledge Worker workload on Login VSI 4.1).
IntelliCache is a XenServer feature that can (only!) be used in a XenDesktop deployment to cache temporary and non-persistent operating-system data on the local XenServer host. It is of particular benefit when many Virtual Machines (VMs) all share a common OS image. The load on the storage array is reduced and performance is enhanced. In addition, network traffic to and from shared storage is reduced as the local storage caches the master image from shared storage.
IntelliCache works by caching data from a VM's parent VDI in local storage on the VM host. This local cache is then populated as data is read from the parent VDI. When many VMs share a common parent VDI (for example by all being based on a particular master image), the data pulled into the cache by a read from one VM can be used by another VM. This means that further access to the master image on shared storage is not required.
Reference: http://bit.ly/2d8Bxpm
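Assuming a host with a local EXT SR, local read caching can be switched on roughly as follows (a sketch using documented xe commands; UUIDs are placeholders and the host must be disabled while the change is made):
    xe host-disable uuid=<host-uuid>
    xe host-enable-local-storage-caching sr-uuid=<local-ext-sr-uuid>
    xe host-enable uuid=<host-uuid>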
|
|
No
|
|
|
No
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
|
No
|
|
Storage Integration (API)
Details
|
Integrated StorageLink (deprecated)
Integrated StorageLink was retired in XenServer 6.2.
Background: XenServer 6 introduced Integrated StorageLink capabilities. It replaces the StorageLink Gateway technology used in previous editions and removes the requirement to run a VM with the StorageLink components. It provides access to existing storage array-based features such as data replication, de-duplication, snapshot and cloning. Citrix StorageLink allows integration with existing storage systems, gives a common user interface across vendors and talks the language of the storage array, i.e. it exposes the native feature set of the array. StorageLink also provides a set of open APIs that link XenServer and Hyper-V environments to third-party backup solutions and enterprise management frameworks. There is a limited HCL of arrays supporting StorageLink. Details: http://bit.ly/2DvcCXY
|
|
Yes
Oracle offers the Storage Connect Plug-in, which simplifies storage management tasks such as LUN creation, removal and resizing directly through Oracle VM Manager. More info on: http://www.oracle.com/us/technologies/virtualization/ovm3-storage-connect-459309.pdf
|
|
|
Basic
Virtual disks on block-based SRs (e.g. FC, iSCSI) have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI.
Note: Bear in mind that QoS settings are applied to virtual disks accessing the LUN from the same host. QoS is not applied across hosts in the pool!
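A hedged sketch of applying the ionice-based QoS setting to a virtual disk's VBD via the xe CLI (UUIDs are placeholders; valid sched/class values should be checked against the Administrator's Guide):
    # Select the ionice QoS algorithm for the VBD
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_type=ionice
    # Example: best-effort scheduling class with priority 5
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_params:sched=best-effort qos_algorithm_params:class=5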
|
|
No
|
|
|
|
Networking |
|
|
Advanced Network Switch
Details
|
Yes (Open vSwitch) - vSwitch Controller
The Open vSwitch is fully supported
The vSwitch brings visibility, security, and control to XenServer virtualized network environments. It consists of a virtualization-aware switch (the vSwitch) running on each XenServer and the vSwitch Controller, a centralized server that manages and coordinates the behavior of each individual vSwitch to provide the appearance of a single vSwitch.
The vSwitch Controller supports fine-grained security policies to control the flow of traffic sent to and from a VM and provides detailed visibility into the behavior and performance of all traffic sent in the virtual network environment. A vSwitch greatly simplifies IT administration within virtualized networking environments, as all VM configuration and statistics remain bound to the VM even if it migrates from one physical host in the resource pool to another.
Details in the XenServer 7.3 vSwitch Controller Users Guide: http://bit.ly/2r1WuaS
|
|
Yes
Support for Centralized vSwitch included. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes (incl. LACP - New)
XenServer 6.1 added the following functionality, maintained with XenServer 6.5 and later:
- Link Aggregation Control Protocol (LACP) support: enables the use of industry-standard network bonding features to provide fault-tolerance and load balancing of network traffic.
- Source Load Balancing (SLB) improvements: allows up to 4 NICs to be used in an active-active bond. This improves total network throughput and increases fault tolerance in the event of hardware failures. The SLB balancing algorithm has been modified to reduce load on switches in large deployments.
Background:
XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:
• LACP bonding is only available for the vSwitch, while active-active and active-passive are available for both the vSwitch and Linux bridge.
• When the vSwitch is the network stack, you can bond either two, three, or four NICs.
• When the Linux bridge is the network stack, you can only bond two NICs.
XenServer 6.1 provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:
• Active/Active mode, with VM traffic balanced between the bonded NICs.
• Active/Passive mode, where only one NIC actively carries traffic.
• LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the server.
Reference: http://bit.ly/2d8uA7C
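As an illustration, an LACP bond can be created from the xe CLI roughly as follows (requires the vSwitch network stack; UUIDs are placeholders):
    # Create the bond network, then bond two physical interfaces (PIFs) with LACP
    xe network-create name-label="bond0-network"
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=lacp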
|
|
Yes
Support for Active/Backup Bonding, Load Balancer Bonding, LACP Bonding. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes (limited for mgmt. and storage traffic)
NEW
VLANs are supported with XenServer. To use VLANs with virtual machines, use switch ports configured as 802.1Q VLAN trunk ports in combination with the XenServer VLAN feature to connect guest virtual network interfaces (VIFs) to specific VLANs (you can create new virtual networks with XenCenter and specify the VLAN IDs).
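For example, a guest network on VLAN 50 can be created from the xe CLI roughly as follows (the physical interface UUID is a placeholder; the connected switch port must be an 802.1Q trunk):
    # Create a virtual network, then attach it to a PIF as a tagged VLAN
    xe network-create name-label="VLAN50"
    xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=50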
XenServer 6.1 removed a previous limitation which caused VM deployment delays when large numbers of VLANs were in use. This improvement enables administrators using XenServer 6.x and later to deploy hundreds of VLANs in a XenServer pool quickly.
|
|
Yes
Support for VLAN Interfaces included. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
No
XenServer does not support PVLANs.
Please refer to the Citrix XenServer Design: Designing XenServer Network Configurations guide for details on network design and security considerations http://bit.ly/2n2aWwW
|
|
No
|
|
|
Yes (guests only)
XenServer 6.1 introduced formal support for IPv6 in XenServer guest VMs (maintained with 7.2). Customers had already used it with e.g. 6.0, but the 6.1 release notes list it as a new official feature: IPv6 Guest Support enables the use of IPv6 addresses within guests, allowing network administrators to plan for network growth.
Full support for IPv6 (i.e. assigning the host itself an IPv6 address) will be addressed in the future.
|
|
Yes
Supported. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
SR-IOV
NEW
Experimental support for SR-IOV-capable networks cards delivers high performance networking for virtual machines, catering for workloads that need this type of direct access.
|
|
Yes
Supported to present Raw Disks directly to VMs. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes
You can set the Maximum Transmission Unit (MTU) for a XenServer network in the New Network wizard or for an existing network in its Properties window. The possible MTU value range is 1500 to 9216.
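The same setting is available from the xe CLI; a minimal sketch (the chosen value must also be supported end-to-end on the physical switches):
    # Create a network with jumbo frames enabled
    xe network-create name-label="jumbo-net" MTU=9000
    # Or change an existing network (takes effect when VIFs/PIFs are replugged)
    xe network-param-set uuid=<network-uuid> MTU=9000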
|
|
Yes
Supported. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes (TSO)
TCP Segmentation Offload (TSO) can be enabled; see http://bit.ly/13e9WLi
By default, Large Receive Offload (LRO) and Generic Receive Offload (GRO) are disabled on all physical network interfaces. Though unsupported, you can enable them manually: http://bit.ly/2djwZiZ
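A hedged sketch of toggling TSO on a physical interface, either persistently via the PIF's ethtool keys or directly with ethtool in dom0 (interface name and UUID are placeholders):
    # Persist the setting on the PIF (applied when the PIF is replugged)
    xe pif-param-set uuid=<pif-uuid> other-config:ethtool-tso=on
    # Or change it immediately in dom0
    ethtool -K eth0 tso on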
|
|
Yes
Supported. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|
|
|
Yes (outgoing)
QoS of network transmissions can be applied either at the VM level (basic), by setting a kB/sec limit on the virtual NIC, or at the vSwitch level (global policies). With the DVS you can select a rate limit (with units) and a burst size (with units). Traffic to all virtual NICs included in this policy level (e.g. you can create VM groups) is limited to the specified rate, with individual bursts limited to the specified number of packets. To prevent inheriting existing enforcement, the QoS policy at the VM level should be disabled.
Background:
To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.
The QoS value limits the rate of transmission from the VM. As with many QoS approaches the QoS setting does not limit the amount of data the VM can receive. If such a limit is desired, Citrix recommends limiting the rate of incoming packets higher up in the network (for example, at the switch level).
Depending on the networking stack configured in the pool, you can set the Quality of Service (QoS) value on VM virtual interfaces (VIFs) in one of two places: either a) on the vSwitch Controller or b) in XenServer (using the CLI or XenCenter).
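A minimal sketch of the CLI variant, limiting a VIF to roughly 10 MB/s outgoing (the VIF UUID is a placeholder; the value is in kilobytes per second):
    xe vif-param-set uuid=<vif-uuid> qos_algorithm_type=ratelimit
    xe vif-param-set uuid=<vif-uuid> qos_algorithm_params:kbps=10240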
|
|
No
|
|
Traffic Monitoring
Details
|
Yes (Port Mirroring)
The XenServer vSwitch has traffic mirroring capabilities. The Remote Switched Port Analyzer (RSPAN) policies support mirroring traffic sent or received on a VIF to a VLAN in order to support traffic monitoring applications. Use the Port Configuration tab in the vSwitch Controller UI to configure policies that apply to the VIF ports.
|
|
Yes
Supported. More info on: http://docs.oracle.com/cd/E50245_01/index.html
|