Product : Microsoft, HyperV/2019, DataCenter
Feature : Storage QoS, Storage, Network and Storage
Content Owner:  Roman Macek
Summary
Distributed Storage QoS
Details
(No major updates with WS2019.) WS2012 R2 introduced a Storage QoS feature that lets you specify maximum and minimum I/O loads, in I/O operations per second (IOPS), for each virtual disk of your virtual machines, ensuring that the storage throughput of one virtual hard disk does not impact the performance of another virtual hard disk on the same host.
- Specify the maximum IOPS allowed for a virtual hard disk that is associated with a virtual machine.
- Specify the minimum IOPS for a virtual hard disk and receive a notification when that minimum is not met.
- Monitor storage-related metrics through the virtual machine metrics interface.
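On a WS2012 R2 host these per-disk limits are set with the Set-VMHardDiskDrive cmdlet; a minimal sketch (the VM name "SQL01" and controller locations are placeholders):

```powershell
# Cap the first SCSI disk of "SQL01" at 500 IOPS and flag when throughput
# drops below 100 IOPS. IOPS are counted in normalized 8 KB units.
Set-VMHardDiskDrive -VMName "SQL01" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 500

# Verify the configured limits
Get-VMHardDiskDrive -VMName "SQL01" |
    Select-Object Path, MinimumIOPS, MaximumIOPS

# Storage-related metrics come through the VM metrics interface
Enable-VMResourceMetering -VMName "SQL01"
Measure-VM -VMName "SQL01"
```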

Details here: http://bit.ly/1ayzAuW

Prior to WS2012 R2 there was no native method of implementing storage I/O control for disk objects of hosts or virtual machines in Server 2012.
However, storage control was possible if the storage was network based (e.g. iSCSI, NFS or SMB) by regulating the network traffic that carries the storage I/O. Please note that this control is limited to either the physical network port or the virtual port on the Hyper-V switch. Because an individual virtual disk that is stored e.g. on SMB is presented as a disk object to the virtual machine (and is not associated with a unique virtual network port), control of individual virtual disk objects is typically not possible this way.
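Such port-level regulation of network-carried storage traffic can be sketched as follows (the VM name "FS01" is a placeholder; note this caps the whole virtual port, not an individual virtual disk):

```powershell
# Limit all traffic on the VM's virtual NIC (including SMB/iSCSI storage I/O)
# to 100 Mbps. -MaximumBandwidth is specified in bits per second.
Set-VMNetworkAdapter -VMName "FS01" -MaximumBandwidth 100000000
```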

Background for network based I/O control:
Windows Server 2012 introduces QoS control for physical networks as well as virtual networks (at the port level of the Hyper-V virtual switch), with the ability to guarantee a minimum bandwidth to a virtual machine or a service.
With System Center 2012 (SP1 or later) you can configure some of the new networking settings centrally (System Center is not required), including bandwidth settings for virtual machines and IEEE priority tagging for QoS prioritization; PowerShell is required for the more advanced networking options.
Windows Server 2012 also takes advantage of Data Center Bridging (DCB)-capable hardware to converge multiple types of network traffic on a single network adapter, with a guaranteed level of service to each type.
Unlike Server 2008 R2 (where only the maximum bandwidth was configurable, with no way to guarantee a minimum bandwidth), Server 2012 introduces a bandwidth floor: the ability to guarantee a certain amount of bandwidth to a specific type of traffic (a port or a virtual machine).
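The bandwidth floor is configured on the Hyper-V virtual switch; a minimal sketch (switch, adapter and VM names are placeholders):

```powershell
# Create a virtual switch that enforces minimum bandwidth by relative weight
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1" `
    -MinimumBandwidthMode Weight

# Guarantee this VM 30% of the switch bandwidth under congestion;
# when there is no congestion it can use as much as is available
Set-VMNetworkAdapter -VMName "SQL01" -MinimumBandwidthWeight 30
```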

Windows Server 2012 offers two different mechanisms to enforce minimum bandwidth:
- through the enhanced packet scheduler in Windows: provides fine-grained classification and is the best choice when many traffic flows require minimum bandwidth enforcement; an example would be a server running Hyper-V hosting many virtual machines, where each virtual machine is classified as a traffic flow

- through network adapters that support Data Center Bridging: supports fewer traffic flows, but can classify network traffic that doesn't originate from the networking stack; e.g. it can be used with a CNA adapter that supports iSCSI offload, where iSCSI traffic bypasses the networking stack - because the packet scheduler in the networking stack doesn't process this offloaded traffic, DCB is the only viable choice to enforce minimum bandwidth
Another example is Server Message Block Direct (SMB Direct), a Windows Server 2012 feature that builds on Remote Direct Memory Access (RDMA). SMB Direct offloads the SMB traffic directly to an RDMA-capable NIC to reduce latency and the number of CPU cycles that are spent on networking.
Again, you can use Data Center Bridging (implemented by some NIC vendors) in network adapters. DCB works in a similar way to Minimum Bandwidth: each class of traffic, regardless of whether it's offloaded or not, has a bandwidth allocation; in the event of network congestion each class gets its share, otherwise each class gets as much bandwidth as is available.
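On DCB-capable hardware the traffic classes are configured with the NetQos cmdlets; a sketch assuming an adapter named "NIC1" and 802.1p priority 3 for SMB (both are example values):

```powershell
# Tag SMB traffic with 802.1p priority 3 (uses the built-in SMB filter)
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

# Reserve 40% of the adapter's bandwidth for priority 3 via ETS
New-NetQosTrafficClass -Name "SMB" -Priority 3 `
    -BandwidthPercentage 40 -Algorithm ETS

# Enable priority flow control for that class and DCB on the adapter
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "NIC1"
```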

In both cases, network traffic first must be classified (built-in classifications in Server 2012 include iSCSI, NFS, SMB and SMB Direct). Windows classifies a packet itself or gives instructions to a network adapter to classify it. The result of classification is a number of traffic flows in Windows, and a given packet can belong to only one of them.
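The built-in classifications surface as switch parameters on New-NetQosPolicy; for example (weight values are illustrative):

```powershell
# Classify iSCSI traffic (built-in filter, TCP/UDP port 3260) and give
# the resulting flow a minimum bandwidth weight of 20
New-NetQosPolicy -Name "iSCSI" -iSCSI -MinBandwidthWeightAction 20

# NFS has a built-in match filter as well
New-NetQosPolicy -Name "NFS" -NFS -MinBandwidthWeightAction 10
```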

Details here: http://bit.ly/Ueb07L

In Windows Server 2016, Microsoft brings distributed Storage QoS. Storage QoS policies are stored in the cluster database and can be applied to a single VM or to multiple VMs. The Storage QoS policy engine is aware of the IOPS capacity of the underlying storage and can centrally manage the IOPS distribution. A QoS policy can set the minimum and the maximum IOPS and/or a maximum bandwidth in bytes per second for a virtual disk. https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-qos/storage-qos-overview
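A WS2016 policy is created once on the storage cluster and then referenced by ID from the Hyper-V hosts; a sketch (policy name "Gold" and VM name "SQL01" are placeholders):

```powershell
# On the Scale-Out File Server / storage cluster: a "Dedicated" policy gives
# each disk it is applied to its own 100-500 IOPS reservation and limit
$policy = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated `
    -MinimumIops 100 -MaximumIops 500

# On the Hyper-V host: attach the policy to a VM's disks by its PolicyId
Get-VMHardDiskDrive -VMName "SQL01" |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Observe per-flow performance cluster-wide
Get-StorageQosFlow | Sort-Object InitiatorIOPS -Descending
```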