Storage backends

Storage backends implement the DiskIo trait, the shared abstraction that all storage frontends use to read and write data. A frontend holds a Disk handle and doesn't know what kind of backend is behind it — the same frontend code works with a local file, a Linux block device, a remote blob, or a layered composition of multiple backends.
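To make the shape of this abstraction concrete, here is a minimal sketch of a DiskIo-style trait with one toy in-memory backend. This is illustrative only, not the real OpenVMM API: the trait methods, the MemDisk type, and the roundtrip helper are all hypothetical, and the real trait is asynchronous.

```rust
/// Hypothetical sketch of a DiskIo-style trait that frontends program
/// against. Sector size is fixed at 512 bytes for simplicity.
pub trait DiskIo {
    /// Total size in 512-byte sectors.
    fn sector_count(&self) -> u64;
    /// Read `buf.len()` bytes starting at byte offset `sector * 512`.
    fn read(&self, sector: u64, buf: &mut [u8]) -> Result<(), String>;
    /// Write `buf` starting at byte offset `sector * 512`.
    fn write(&mut self, sector: u64, buf: &[u8]) -> Result<(), String>;
}

/// Toy in-memory backend. A frontend holding a `&mut dyn DiskIo` cannot
/// tell this apart from a file- or NVMe-backed implementation.
pub struct MemDisk {
    data: Vec<u8>,
}

impl MemDisk {
    pub fn new(sectors: u64) -> Self {
        MemDisk { data: vec![0; (sectors * 512) as usize] }
    }
}

impl DiskIo for MemDisk {
    fn sector_count(&self) -> u64 {
        (self.data.len() / 512) as u64
    }
    fn read(&self, sector: u64, buf: &mut [u8]) -> Result<(), String> {
        let off = (sector * 512) as usize;
        buf.copy_from_slice(&self.data[off..off + buf.len()]);
        Ok(())
    }
    fn write(&mut self, sector: u64, buf: &[u8]) -> Result<(), String> {
        let off = (sector * 512) as usize;
        self.data[off..off + buf.len()].copy_from_slice(buf);
        Ok(())
    }
}

/// Frontend-style code: generic over any backend behind the trait.
pub fn roundtrip(disk: &mut dyn DiskIo) -> Vec<u8> {
    disk.write(0, b"hello").unwrap();
    let mut buf = vec![0u8; 5];
    disk.read(0, &mut buf).unwrap();
    buf
}
```

Because `roundtrip` only sees the trait, swapping `MemDisk` for any other backend requires no frontend changes.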

Backend catalog

| Backend | Crate | Wraps | Platform | Key characteristic |
|---|---|---|---|---|
| FileDisk | disk_file | Host file | Cross-platform | Simplest backend. Blocking I/O via unblock(). |
| Vhd1Disk | disk_vhd1 | VHD1 fixed file | Cross-platform | Parses VHD footer for geometry. |
| VhdmpDisk | disk_vhdmp | Windows vhdmp driver | Windows | Dynamic and differencing VHD/VHDX. |
| BlobDisk | disk_blob | HTTP / Azure Blob | Cross-platform | Read-only. HTTP range requests. |
| BlockDeviceDisk | disk_blockdevice | Linux block device | Linux | io_uring, resize via uevent, PR passthrough. |
| NvmeDisk | disk_nvme | Physical NVMe (VFIO) | Linux/Windows | User-mode NVMe driver. Resize via AEN. |
| StripedDisk | disk_striped | Multiple Disks | Cross-platform | Stripes data across underlying disks. |
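StripedDisk's core job is address translation. The sketch below shows one way such a mapping can work; the function name, parameters, and round-robin layout are illustrative assumptions, not the disk_striped implementation.

```rust
/// Hypothetical striped-disk address mapping. A logical sector is
/// routed to one underlying disk based on the stripe (chunk) size:
/// consecutive stripes go to consecutive disks, round-robin.
pub fn map_sector(
    logical_sector: u64,
    sectors_per_stripe: u64,
    disk_count: u64,
) -> (u64, u64) {
    let stripe = logical_sector / sectors_per_stripe;
    let within = logical_sector % sectors_per_stripe;
    let disk = stripe % disk_count;          // which underlying disk
    let local_stripe = stripe / disk_count;  // stripe index on that disk
    (disk, local_stripe * sectors_per_stripe + within)
}
```

With 8-sector stripes over two disks, logical sectors 0-7 land on disk 0, 8-15 on disk 1, 16-23 back on disk 0, and so on.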

Decorators

Decorators wrap another Disk and transform I/O in transit. Features compose by stacking decorators without modifying the backends underneath.

| Decorator | Crate | Transform |
|---|---|---|
| CryptDisk | disk_crypt | XTS-AES-256 encryption. Encrypts on write, decrypts on read. |
| DelayDisk | disk_delay | Adds configurable latency to each I/O operation. |
| DiskWithReservations | disk_prwrap | In-memory SCSI persistent reservation emulation. |
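The decorator pattern itself can be sketched in a few lines: the wrapper implements the same trait as the disk it wraps and transforms bytes in transit. Everything here is hypothetical, not the disk_crypt code, and a trivial XOR stands in for XTS-AES-256.

```rust
/// Minimal DiskIo-style trait for this sketch (not the real API).
pub trait DiskIo {
    fn read(&self, sector: u64, buf: &mut [u8]);
    fn write(&mut self, sector: u64, buf: &[u8]);
}

/// Toy in-memory backend to wrap.
pub struct MemDisk(pub Vec<u8>);

impl DiskIo for MemDisk {
    fn read(&self, sector: u64, buf: &mut [u8]) {
        let off = (sector * 512) as usize;
        buf.copy_from_slice(&self.0[off..off + buf.len()]);
    }
    fn write(&mut self, sector: u64, buf: &[u8]) {
        let off = (sector * 512) as usize;
        self.0[off..off + buf.len()].copy_from_slice(buf);
    }
}

/// Decorator: transforms on write, reverses on read, and delegates to
/// the wrapped disk. Because it implements the same trait, decorators
/// stack freely over any backend.
pub struct XorDisk<D: DiskIo> {
    pub inner: D,
    pub key: u8,
}

impl<D: DiskIo> DiskIo for XorDisk<D> {
    fn read(&self, sector: u64, buf: &mut [u8]) {
        self.inner.read(sector, buf);
        for b in buf.iter_mut() {
            *b ^= self.key;
        }
    }
    fn write(&mut self, sector: u64, buf: &[u8]) {
        let enc: Vec<u8> = buf.iter().map(|b| b ^ self.key).collect();
        self.inner.write(sector, &enc);
    }
}
```

Callers see plaintext on both sides; only the inner disk ever holds transformed bytes.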

Layered disks

A LayeredDisk composes multiple layers into a single DiskIo implementation. Each layer tracks which sectors it has; reads fall through from top to bottom until a layer has the requested data. This powers the memdiff: and mem: CLI options.
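The fall-through read can be modeled in miniature as follows. This is a hypothetical sketch of the behavior described above, not the LayeredDisk implementation; the types and methods are invented for illustration.

```rust
use std::collections::HashMap;

/// A layer records which sectors it owns (sector -> 512-byte payload).
pub struct Layer {
    pub sectors: HashMap<u64, Vec<u8>>,
}

/// Layers are ordered top to bottom; index 0 is the topmost layer
/// (e.g. a RAM diff layer over a read-only base).
pub struct LayeredDisk {
    pub layers: Vec<Layer>,
}

impl LayeredDisk {
    /// Writes always land in the top layer, shadowing lower layers.
    pub fn write(&mut self, sector: u64, data: Vec<u8>) {
        self.layers[0].sectors.insert(sector, data);
    }

    /// A read falls through from top to bottom and takes the first
    /// layer that has the requested sector.
    pub fn read(&self, sector: u64) -> Option<&Vec<u8>> {
        self.layers.iter().find_map(|l| l.sectors.get(&sector))
    }
}
```

This is the shape behind memdiff:: writes accumulate in a RAM-backed top layer while unmodified sectors keep coming from the base.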

Two layer implementations exist today.

The storage pipeline page covers the full architecture: how frontends, backends, decorators, and the layered disk model connect, plus cross-cutting concerns like online disk resize and virtual optical media.