Product : Scale Computing, HC3 [HCI]/8.6.5, x86
Feature : Dedup/Compr. Process, Efficiency, Data Services
Content Owner:  Herman Rutten
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write, e.g. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).

The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
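To make the contrast concrete, the following sketch models the two extremes in Python: an inline (pre-ack) store that deduplicates before acknowledging the write, and a post-process store that acknowledges first and deduplicates in a later background pass. Class and method names are illustrative, not part of any vendor's implementation; SHA-256 stands in for whatever hashing a real product uses.

```python
import hashlib

class InlineDedupStore:
    """Inline (pre-ack) sketch: the hash lookup happens on the write path,
    before the write is acknowledged. Saves capacity and avoids the extra I/O."""
    def __init__(self):
        self.blocks = {}   # hash -> block data (the single stored copy)
        self.refs = {}     # hash -> reference count
    def write(self, block: bytes) -> str:
        h = hashlib.sha256(block).hexdigest()
        if h in self.blocks:
            self.refs[h] += 1          # duplicate: bump the refcount, store nothing new
        else:
            self.blocks[h] = block
            self.refs[h] = 1
        return h                       # ack is only returned after dedup completes

class PostProcessDedupStore:
    """Post-process sketch: every write lands on 'disk' first and is acknowledged
    immediately; a later background pass reclaims duplicate capacity."""
    def __init__(self):
        self.disk = []     # raw blocks exactly as written
        self.blocks = {}
        self.refs = {}
    def write(self, block: bytes) -> None:
        self.disk.append(block)        # ack immediately; no dedup cost on the write path
    def dedup_pass(self) -> None:
        for block in self.disk:
            h = hashlib.sha256(block).hexdigest()
            if h in self.blocks:
                self.refs[h] += 1
            else:
                self.blocks[h] = block
                self.refs[h] = 1
        self.disk.clear()              # duplicates released back to the free pool
```

Note how the inline store never writes the duplicate at all (a performance and capacity benefit), while the post-process store pays the full write cost and only recovers capacity later.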

The Scale Computing HC3 data deduplication feature is a post-process implementation that works with existing background processes to identify duplicate 1 MiB blocks of data on a given physical disk. The process leverages the SCRIBE metadata reference count mechanism by finding independently written blocks that are identical. This duplicate review runs per physical disk on each node, keeping the footprint of the process as small as possible while still providing all of the benefits of full deduplication.

The deduplication process is broken into two steps. The first step reviews VM data blocks by creating a hash index of each block and storing the hashes in the node's RAM. The hashing algorithm scans system data for deduplication candidates at roughly 1 MiB/s on HDDs and 4 MiB/s on SSDs; both estimates are per node. The second step occurs during periods of low system utilization. The system works through the queue of hashed blocks in RAM, searching for matching hashes until the background disk scan regenerates them. When the process finds two blocks with a matching hash, it verifies that the underlying blocks are in fact duplicates before incrementing the reference count in the metadata of the surviving block. Updating the metadata count for that block essentially releases the space of the duplicate block, which then returns to the system's free storage pool. This secondary process can progress much faster than 1 MiB/s; its speed depends on the current system load.
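The two steps above can be sketched as two small functions: one that builds the in-RAM hash index, and one that walks the candidates, verifies them byte-for-byte (a hash match alone is not trusted), bumps the survivor's reference count, and frees the duplicate. All names, the block size constant, and the data structures are illustrative assumptions, not HC3 internals.

```python
import hashlib
from collections import defaultdict

BLOCK = 1024 * 1024  # 1 MiB blocks, as described above (illustrative constant)

def build_hash_index(blocks):
    """Step 1: hash every block and group addresses with matching hashes in RAM."""
    index = defaultdict(list)
    for addr, data in blocks.items():
        index[hashlib.sha256(data).hexdigest()].append(addr)
    return index

def dedup_candidates(blocks, refcounts, index):
    """Step 2 (run at low utilization): verify candidates byte-for-byte, then
    bump the surviving block's refcount and release the duplicate."""
    freed = []
    for addrs in index.values():
        keep = addrs[0]
        for addr in addrs[1:]:
            if blocks[addr] == blocks[keep]:  # hash match verified against raw data
                refcounts[keep] += 1          # metadata now also covers the duplicate
                del blocks[addr]              # duplicate returns to the free pool
                freed.append(addr)
    return freed
```

Splitting the work this way keeps the expensive part (the disk scan that feeds the hash index) off the write path, which is the defining property of a post-process design.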

The SCRIBE metadata reference count mechanism is the same architecture utilized by snapshots and clones in SCRIBE to allow quick, efficient, low-impact thin provisioning on the HC3 system. Shared blocks are referenced, and a reference count for each shared block is stored in the metadata.
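A minimal sketch of such a shared reference-count mechanism, assuming a simple block-id-to-count map: cloning only bumps counts instead of copying data, and a block is returned to the free pool only when its last reference is dropped. The class and method names are hypothetical, not SCRIBE's actual API.

```python
class RefCountedBlockStore:
    """Sketch of a metadata refcount shared by dedup, snapshots, and clones."""
    def __init__(self):
        self.refs = {}  # block id -> reference count

    def clone(self, block_ids):
        # Cloning a VM or taking a snapshot just increments counts;
        # no block data is copied, which is what makes it thin and fast.
        for b in block_ids:
            self.refs[b] = self.refs.get(b, 0) + 1

    def free(self, b):
        # Dropping a reference only releases the block when the count hits zero.
        self.refs[b] -= 1
        if self.refs[b] == 0:
            del self.refs[b]  # last reference gone: block rejoins the free pool
```

Because deduplication reuses this same counting machinery, freeing a duplicate is just a metadata update, with no data movement involved.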

SCRIBE = Scale Computing Reliable Independent Block Engine