Running OpenVMM
This page offers a high-level overview of different ways to launch and interact with OpenVMM.
These examples are by no means exhaustive; treat them as a jumping-off point for subsequent self-guided experimentation with OpenVMM.
Obtaining a copy of OpenVMM
To get started, ensure you have a copy of the OpenVMM executable and its runtime dependencies, via one of the following options:
Building OpenVMM Locally
Follow the instructions on: Building OpenVMM.
Pre-Built Binaries
If you would prefer to try OpenVMM without building it from scratch, you can download pre-built copies of the binary from OpenVMM CI.
Simply select a successful pipeline run (indicated by a green checkmark), and
scroll down to the artifacts list to download the *-openvmm artifact appropriate
for your architecture and operating system. Note that you must be signed in to
GitHub in order to download artifacts.
Examples
These examples all use cargo run --, with the assumption that you are a
developer building your own copy of OpenVMM locally!
To run these examples using a pre-compiled copy of OpenVMM, swap cargo run -- with /path/to/openvmm.
When running via cargo run, environment variables in .cargo/config.toml
automatically point OpenVMM to the mu_msvm UEFI firmware (MSVM.fd)
downloaded by cargo xflowey restore-packages.
When running the openvmm binary directly, these environment variables are
not set, and you will get:
fatal error: must provide uefi firmware when booting with uefi
To fix this, explicitly pass the firmware using --uefi-firmware:
openvmm --uefi --uefi-firmware path/to/MSVM.fd --disk memdiff:path/to/disk.vhdx
If you ran cargo xflowey restore-packages, the firmware is at:
.packages/hyperv.uefi.mscoreuefi.x64.RELEASE/MsvmX64/RELEASE_VS2022/FV/MSVM.fd # x64
.packages/hyperv.uefi.mscoreuefi.AARCH64.RELEASE/MsvmAARCH64/RELEASE_VS2022/FV/MSVM.fd # aarch64
If you used cargo xflowey vmm-tests --build-only --dir <out>, the firmware
is copied into that output directory under the same relative path.
Alternatively, set the environment variable so you don't need the flag each time:
# x64
export X86_64_OPENVMM_UEFI_FIRMWARE=path/to/MSVM.fd
# aarch64
export AARCH64_OPENVMM_UEFI_FIRMWARE=path/to/MSVM.fd
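The two exports above can be combined into a small snippet that selects the right variable for the host architecture. This is a sketch assuming the default .packages layout created by cargo xflowey restore-packages; adjust the paths if your package directory differs.

```shell
# Pick the firmware path matching the host architecture and export the
# corresponding variable. Paths assume the default `.packages` layout
# created by `cargo xflowey restore-packages`.
case "$(uname -m)" in
  x86_64)
    export X86_64_OPENVMM_UEFI_FIRMWARE=.packages/hyperv.uefi.mscoreuefi.x64.RELEASE/MsvmX64/RELEASE_VS2022/FV/MSVM.fd
    ;;
  aarch64)
    export AARCH64_OPENVMM_UEFI_FIRMWARE=.packages/hyperv.uefi.mscoreuefi.AARCH64.RELEASE/MsvmAARCH64/RELEASE_VS2022/FV/MSVM.fd
    ;;
esac
```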
If you run into any issues, please refer to Troubleshooting.
Preface: Quitting OpenVMM
By default, OpenVMM will connect the guest's COM1 serial port to the current terminal session, forwarding all keystrokes directly to the VM.
As such, a simple ctrl-c does not suffice to quit OpenVMM!
Instead, you can type ctrl-q to enter OpenVMM's interactive console, and enter q to quit.
Sample Linux Kernel, via direct-boot
This example will launch Linux via direct boot (i.e., without going through UEFI
or BIOS), appending single to the kernel command line.
The Linux guest's console will be hooked up to COM1, and is relayed to the host terminal by default.
To launch Linux with an interactive console into the shell within initrd, simply run:
cargo run
This works by setting the default [env] vars in .cargo/config.toml to
configure OpenVMM to use a set of pre-compiled test kernel + initrd images,
which are downloaded as part of the cargo xflowey restore-packages command.
Note that this behavior only happens when run via cargo run (as cargo is the
tool which ensures the required env-vars are set).
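The mechanism can be sketched as an [env] table in .cargo/config.toml. Note that the variable names and paths below are illustrative placeholders, not the repository's actual contents; consult the real .cargo/config.toml for the exact entries.

```toml
[env]
# Illustrative placeholders only; the real file points these at the
# pre-compiled images downloaded by `cargo xflowey restore-packages`.
X86_64_OPENVMM_LINUX_DIRECT_KERNEL = "path/to/vmlinux"
X86_64_OPENVMM_LINUX_DIRECT_INITRD = "path/to/initrd"
```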
The source for the sample kernel + initrd can be found on the microsoft/openvmm-deps repo.
The kernel and initrd can be controlled via options:
--kernel <PATH>: The kernel image. Must be an uncompressed kernel (vmlinux, not bzImage).
--initrd <PATH>: The initial ramdisk image.
-c <STRING> or --cmdline <STRING>: Extra kernel command line options, such as root=/dev/sda.
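For example, these options can be combined into a single invocation. The paths and command line below are placeholders; the sketch assembles the command and prints it, so drop the leading echo to actually run it.

```shell
# Assemble a direct-boot invocation from its parts (paths are placeholders).
# Drop the leading `echo` to actually run the command.
KERNEL=path/to/vmlinux      # must be uncompressed (vmlinux, not bzImage)
INITRD=path/to/initrd.img
CMDLINE="root=/dev/sda single"
echo cargo run -- --kernel "$KERNEL" --initrd "$INITRD" -c "$CMDLINE"
```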
Windows, via UEFI
This example will launch a modern copy of Windows via UEFI, using the mu_msvm
firmware package.
A copy of the mu_msvm UEFI firmware is automatically downloaded via cargo xflowey restore-packages.
cargo run -- --uefi --disk memdiff:path/to/windows.vhdx --gfx
For more info on --gfx, and how to actually interact with the VM using a
mouse/keyboard/video, see the Graphical Console
docs.
The file windows.vhdx can be any format of VHD(X).
Note that OpenVMM does not currently support using dynamic VHD/VHDX files on Linux hosts. Unless you have a fixed VHD image, you will need to convert the image to raw format, using the following command:
qemu-img convert -f vhdx -O raw windows.vhdx windows.img
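If you're unsure whether an existing VHD is fixed or dynamic, you can inspect its footer directly: per the VHD format specification, the last 512 bytes of the file contain a disk-type field (a big-endian u32 at offset 60, where 2 = fixed, 3 = dynamic, 4 = differencing). A minimal sketch; VHDX files use a different on-disk layout and are not handled here.

```shell
# Report whether a VHD image is fixed, dynamic, or differencing by reading
# the disk-type field of the 512-byte footer at the end of the file.
# VHDX files use a different on-disk layout and are not handled here.
vhd_type() {
  size=$(wc -c < "$1")
  # Disk type is a big-endian u32 at offset 60 within the footer; for the
  # values used in practice, reading the low byte (offset 63) is enough.
  t=$(dd if="$1" bs=1 skip=$((size - 512 + 63)) count=1 2>/dev/null | od -An -tu1 | tr -d ' ')
  case "$t" in
    2) echo fixed ;;
    3) echo dynamic ;;
    4) echo differencing ;;
    *) echo unknown ;;
  esac
}
```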
Also, note the use of memdiff, which creates a memory-backed "differencing
disk" shim between the VMM and the backing disk image. This ensures that any
writes the VM makes to the VHD are not persisted between runs, which is very
useful when iterating on OpenVMM code, since booting the VM becomes repeatable
and you don't have to worry about shutting the guest down cleanly. Use file
instead for normal persistent storage.
OpenHCL, via Linux Direct Boot
This example will boot OpenHCL in Linux direct mode, running a minimal shell
inside VTL2. This is the same configuration used by the openhcl_linux_direct_x64
integration tests.
First, build the test artifacts from Linux or WSL using vmm-tests --build-only.
The IGVM must be built on Linux:
cargo xflowey vmm-tests --build-only --dir <out> --target windows-x64
If you only need the IGVM binary (and already have openvmm.exe), you can
use cargo xflowey build-igvm instead — it's faster than building the full
test suite.
This places openvmm.exe and openhcl-x64-test-linux-direct.bin in the
<out> directory. Then, on Windows, from the <out> directory:
.\openvmm.exe `
--hv `
--vtl2 `
--igvm openhcl-x64-test-linux-direct.bin `
-c "panic=-1 reboot=triple UNDERHILL_SERIAL_WAIT_FOR_RTS=1 UNDERHILL_CMDLINE_APPEND=rdinit=/bin/sh" `
-m 2GB `
--vmbus-com1-serial "term,name=VTL0 Linux" `
--com3 "term,name=VTL2 OpenHCL" `
--vtl2-vsock-path $env:temp\ohcldiag-dev
The --vmbus-com1-serial flag is required when using rdinit=/bin/sh.
The shell running as PID 1 needs a controlling terminal (tty) — without one
it exits immediately, causing a kernel panic and infinite reboot loop.
The --com3 flag is optional but recommended — it gives you VTL2 (OpenHCL)
kernel console output for debugging.
For more details on running OpenHCL on OpenVMM, including VMBus relay and device assignment, see Running OpenHCL: OpenVMM.
Alpine Linux, via Direct Boot
See the dedicated Alpine Linux guide for a full walkthrough of booting Alpine from a cloud disk image using direct boot with PCIe and virtio-blk.
DOS, via PCAT BIOS
While DOS in particular is not a scenario that the OpenVMM project has invested heavily in, the fact that DOS is able to boot in OpenVMM serves as a testament to OpenVMM's solid support of legacy x86 devices and infrastructure.
The following command will boot a copy of DOS from a virtual floppy disk, using the Hyper-V PCAT BIOS.
Booting via PCAT is not just for DOS, though! Many older operating systems, including older versions of Windows and Linux, require booting via BIOS.
cargo run -- --pcat --gfx --floppy memdiff:/path/to/msdos.vfd --pcat-boot-order=floppy,optical,hdd