When we introduced a building block approach to our reference architecture, many questions from the wider team revolved around the scaling maximums and limitations of the respective desktop virtualization solutions, which you need to know in order to create valid configurations and correctly sized building blocks.

It quickly became apparent that while, for example, VMware’s vSphere maximums are well documented, the virtual desktop specific guidelines are scattered across different documents; some are not listed at all, and others (e.g. storage related) are based on third-party vendor recommendations rather than limits specified by VMware.

How many systems per “cluster”? How many VMs per replica/base image or LUN? How many broker or management servers per building block?

So I thought it would be worthwhile to summarize in this post the (high-level) guidelines and assumptions we used for View, XenDesktop and VERDE (verified by the respective vendors).

In a nutshell, understanding the scaling limitations allows you to “assemble” systems into clusters, building blocks and larger constructs (PODs) by combining them with management and other peripheral components.

I’ve created a few graphics to show you the approach conceptually in the example below.

Figure 1 – Step 1: Assembling hosts into a cluster and adding storage and management components to create a building block for XenDesktop on vSphere

Figure 2 – Step 2: Adding broker and other access components to the building block for user access

Figure 3 – Step 3: Creating a larger scale user environment by combining building blocks (10,000 user example for XenDesktop on vSphere)
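To make the building block arithmetic behind these three steps concrete, here is a minimal Python sketch. The per-host VM density is a hypothetical input, and the other constants are the guideline figures summarized in the sections below; treat the output as a starting point, not a validated design.

```python
import math

# Assumed limits from the guidelines in this post - always verify against
# current vendor documentation before sizing a real environment.
VMS_PER_HOST = 100        # hypothetical density, well under the 512/host vSphere max
HOSTS_PER_CLUSTER = 8     # cluster limit incl. N+1 (VMFS-driven, see below)
VMS_PER_VCENTER = 2000    # practical vCenter ceiling, used as building block boundary
POD_MAX_USERS = 10000     # largest tested pod size

def size_pod(users: int) -> dict:
    """Roughly translate a user count (1 VM per user) into clusters and blocks."""
    if users > POD_MAX_USERS:
        raise ValueError("exceeds tested pod size - split into multiple pods")
    usable_hosts = HOSTS_PER_CLUSTER - 1                      # keep one N+1 spare
    clusters = math.ceil(users / (VMS_PER_HOST * usable_hosts))
    blocks = math.ceil(users / VMS_PER_VCENTER)
    return {"clusters": clusters, "building_blocks": blocks}

print(size_pod(10000))  # {'clusters': 15, 'building_blocks': 5} with these inputs
```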

VMware View 5 and 5.1

General
  • Maximum 8 Hosts per Cluster (including N+1) – the limit is caused by View Composer’s concurrent file access in conjunction with the VMFS file system (the maximum without View Composer is 32)
    • New: 32 hosts per cluster with View 5.1 when using NFS volumes (no VMFS)
  • Each desktop pool can span a maximum of one ESX/ESXi cluster
  • Up to 16 VMs per CPU core (recommended; not a hard limit, as up to 25 are supported on vSphere)
  • Max 512 VMs per host (documented vSphere maximum)
  • Max 1,000 VMs per View desktop pool
  • Max 2,000 VMs per vCenter (primarily determined by the number of concurrent activities from View components against the vCenter server)
  • Maximum pod size (a construct of VMware View building blocks): 10,000 users (tested by VMware)
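Since a desktop pool cannot span more than one cluster, the effective pool size is the smaller of the cluster’s VM capacity and the 1,000 VM pool limit. A minimal sketch of that check, assuming a hypothetical 16-core host and the recommended density above:

```python
# View 5.x pool sizing check, using the maximums listed above.
CORES_PER_HOST = 16       # hypothetical host specification
VMS_PER_CORE = 16         # recommended density (vSphere supports up to 25)
MAX_VMS_PER_HOST = 512    # documented vSphere maximum
HOSTS_PER_CLUSTER = 8     # View Composer / VMFS cluster limit
MAX_VMS_PER_POOL = 1000   # View pool limit (one cluster per pool)

def max_pool_size() -> int:
    """Largest desktop pool that fits on a single cluster under these limits."""
    vms_per_host = min(CORES_PER_HOST * VMS_PER_CORE, MAX_VMS_PER_HOST)  # 256 here
    cluster_capacity = vms_per_host * (HOSTS_PER_CLUSTER - 1)  # keep one N+1 spare
    return min(cluster_capacity, MAX_VMS_PER_POOL)

print(max_pool_size())  # 1000 -> the pool limit, not host capacity, binds here
```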
Storage:

Storage maximums with VMware View are less clearly defined (mainly best practices have been published):

  • Typically the recommended maximum number is 64-128 linked clones (VMs) per VMFS datastore
  • VAAI storage systems typically allow for numbers higher than 128
    • (the limit without VAAI is primarily driven by SCSI reservations on the VMFS file system during metadata updates – with VAAI-enabled LUNs, ESX uses the atomic test and set (ATS) algorithm to lock the LUN, greatly reducing the impact)
    • Storage vendors have shown that an NFS-based datastore (NFS does not use VMFS) can support 256 or more linked clones (the actual theoretical maxima of VMFS and NFS for vSphere are higher).
  • Max 64TB per LUN (VMFS); the maximum size of NFS datastores is often determined by the NFS storage array (check with your vendor)
    • The maximum size listed above is not intended to be a “recommended” number, as the right size will be determined by various factors (performance per LUN, VM sizes, operational concerns like backup, etc.)
  • Max 1,000 VMs per replica and View pool
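To estimate datastore counts from these guidelines, a small sketch (the per-datastore figures are the guideline values above, not hard limits):

```python
import math

# Linked clones per datastore - guideline figures from above, not hard limits.
CLONES_PER_DATASTORE = {"vmfs": 64, "vmfs_vaai": 128, "nfs": 256}
MAX_VMS_PER_REPLICA = 1000

def datastores_needed(vms: int, storage: str = "vmfs") -> int:
    """Datastores required to host a pool of linked clones off one replica."""
    if vms > MAX_VMS_PER_REPLICA:
        raise ValueError("split across multiple replicas/pools")
    return math.ceil(vms / CLONES_PER_DATASTORE[storage])

print(datastores_needed(1000, "vmfs"))       # 16 datastores
print(datastores_needed(1000, "vmfs_vaai"))  # 8 datastores
```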
Maximum Number of Connections:
  • 1 Connection Server with direct connection, RDP or PCoIP: 2,000
  • 7 Connection Servers (5 active + 2 spares) with direct connection, RDP or PCoIP: 10,000
  • 1 Connection Server with tunneled connection: 2,000
  • 1 Connection Server with PCoIP Secure Gateway: 2,000
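Turning these per-server figures into a server count (the spare count is a parameter; the 5+2 configuration above is the VMware-tested maximum):

```python
import math

USERS_PER_CONNECTION_SERVER = 2000  # per-server session limit from above
MAX_SERVERS_PER_POD = 7             # 5 active + 2 spares, tested at 10,000 users

def connection_servers(users: int, spares: int = 1) -> int:
    """Connection Servers needed for a pod, including spares."""
    servers = math.ceil(users / USERS_PER_CONNECTION_SERVER) + spares
    if servers > MAX_SERVERS_PER_POD:
        raise ValueError("exceeds tested pod configuration - add another pod")
    return servers

print(connection_servers(10000, spares=2))  # 7 (the tested 5+2 configuration)
```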


Citrix XenDesktop 5.6, PVS 6.1 and vSphere 5

When utilizing XenDesktop with a VMware vSphere backend infrastructure (managed by VMware vCenter) many scaling considerations are determined by the VMware environment.

General
  • Maximum 8 Hosts per Cluster (including 1 hot spare)
  • Maximum 8 Hosts per Cluster when using Citrix Machine Creation Services (including N+1) – the limit is caused by concurrent file access in conjunction with the VMFS file system
    • For NFS this limit does not apply (a Citrix support statement is pending)
  • Up to 16 VMs per CPU core (recommended; not a hard limit, as up to 25 are supported on vSphere)
  • Max 512 VMs per host (documented vSphere maximum)
  • Max 2,000 VMs per vCenter (primarily determined by the number of concurrent activities from XenDesktop components against the vCenter server)
Storage:
  • Typically the recommended maximum number is 64-128 XenDesktop Machine Creation Services (MCS) difference disks (or VMs) per VMFS datastore.
  • VAAI storage systems typically allow for numbers higher than 128 VMs
    • (the limit without VAAI is primarily driven by SCSI reservations on the VMFS file system during metadata updates – with VAAI-enabled LUNs, ESX uses the atomic test and set (ATS) algorithm to lock the LUN, greatly reducing the impact)
    • Storage vendors have shown that an NFS-based datastore (NFS datastores do not use VMFS) can support 256 or more linked clones (the actual theoretical maxima of VMFS and NFS for vSphere are higher).
  • Max 1,000 VMs per MCS base image (the equivalent of a View replica disk)
Management Server and Maximum Number of Connections:

The number of supported users depends heavily on the actual load generated by registrations or logins per minute, so the numbers below are only a high-level guideline:

  • 1 XenDesktop Controller: can handle 10,000+ users; the configuration listed in our reference architecture (virtual machine with 4 vCPU, 4GB RAM) should be able to handle 5,000+ users.
    Always use N+1 servers for redundancy.
  • Provisioning Server: a single virtual server (4 vCPU and 32GB RAM) will support approximately 1,000 users.
    Always use N+1 servers for redundancy.
  • License Server: a single Citrix License Server (2 vCPU, 4GB RAM) can issue approximately 170 licenses per second, or over 300,000 licenses every 30 minutes.
    Because of this scalability, a single virtual license server with VM-level HA can be implemented (if the license server is down, a grace period of 30 days is available).
  • A single Web Interface server has the capacity to handle more than 30,000 connections per hour.
    Two Web Interface servers should always be configured and load balanced through NetScaler to provide redundancy and balance load (NetScaler design is outside the scope of this paper).

For environments with smaller numbers of users (e.g. <500; actual numbers will depend on user activity), the Web Interface service as well as the license server can reside on the XenDesktop Controller instance rather than on a dedicated server.
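Taken together, the guideline capacities above translate into a simple component count per pod. A sketch, assuming the reference configurations listed (roughly 5,000 users per Controller VM and 1,000 per Provisioning Server; these are guideline figures, not hard limits):

```python
import math

# Guideline capacities from above (per virtual server); verify with Citrix docs.
USERS_PER_CONTROLLER = 5000   # 4 vCPU / 4GB RAM reference configuration
USERS_PER_PVS = 1000          # 4 vCPU / 32GB RAM Provisioning Server

def management_servers(users: int) -> dict:
    """XenDesktop management component counts for one pod (N+1 where noted)."""
    return {
        "controllers": math.ceil(users / USERS_PER_CONTROLLER) + 1,    # N+1
        "provisioning_servers": math.ceil(users / USERS_PER_PVS) + 1,  # N+1
        "web_interface_servers": 2,  # always two, load balanced via NetScaler
        "license_servers": 1,        # single instance with VM-level HA suffices
    }

print(management_servers(10000))
# {'controllers': 3, 'provisioning_servers': 11,
#  'web_interface_servers': 2, 'license_servers': 1}
```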

Citrix NetScaler Access Gateway: provides secure remote access and single sign-on capabilities (e.g. from outside the corporate network). A single Access Gateway can provide 10,000+ concurrent ICA connections and should be deployed in an N+1 configuration.


Virtual Bridges VERDE 6.0

One of the aspects I really like about VERDE is the fact that its scaling limitations are far simpler to deal with, as the product inherently provides an architecture that allows for easy horizontal scaling.

Rather than having to use dedicated management servers (potentially with multiple UIs), each VERDE server comes with an integrated connection broker and a hypervisor to run VDI sessions, and is managed from a single Management Console.

Servers can be clustered together using Virtual Bridges’ stateless cluster algorithm; in addition, the Distributed Connection Brokering architecture eliminates any single choke point and therefore increases the scalability and availability of the VDI solution.

  • Max 10,000 hosts per VERDE cluster
  • Max 1,000,000 VMs per VERDE cluster
  • Max 2,000 users per connection broker (typically each hypervisor host also acts as a broker, so the alignment of “users per server” automatically takes care of this)
  • Cloud Branch Servers are sized like regular clusters
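Because every VERDE host can also act as a broker, sizing is mostly a per-host density check against the cluster caps above. A minimal sketch (users per host is a hypothetical input):

```python
# VERDE cluster sanity check using the maximums listed above.
MAX_HOSTS_PER_CLUSTER = 10_000
MAX_VMS_PER_CLUSTER = 1_000_000
MAX_USERS_PER_BROKER = 2_000   # each host typically also acts as a broker

def verde_cluster_ok(hosts: int, users_per_host: int) -> bool:
    """True if the planned cluster stays inside all three limits."""
    return (
        hosts <= MAX_HOSTS_PER_CLUSTER
        and hosts * users_per_host <= MAX_VMS_PER_CLUSTER
        and users_per_host <= MAX_USERS_PER_BROKER  # broker load per host
    )

print(verde_cluster_ok(hosts=100, users_per_host=100))  # True for this example
```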
Storage
  • Shared storage needs to be NFS based
    • Shared storage typically only contains the golden image and the VERDE persistent disks for VERDE user profile management (plus the small cluster metadata), greatly reducing the shared storage requirements!

Yes, it can be that simple … 😉

PS: I have included more details on creating building blocks and the surrounding design considerations in the upcoming Redbook.
Please always check the latest vendor documentation for official (and updated) numbers where available.

Andy

### Archived Article – thanks to Andreas Groth – WhatMatrix Community Affiliate (originally published on Virtualizationmatrix.com) ###
