A virtual machine is a virtual computer (the "guest") running inside another computer (the "host"). Virtual machines are useful for testing, running different operating systems, isolating parts of a system, and more.
nvmm(4) (NetBSD Virtual Machine Monitor) is NetBSD's native hypervisor. In regular usage, it's used as an "accelerator" for the QEMU virtual machine software. It will make virtual machines on your NetBSD host run faster by taking advantage of CPU virtualization extensions. Currently, a CPU that supports AMD SVM or Intel VMX is required, but more backends for other architectures may be added in the future. QEMU can also be used without an accelerator, with significantly reduced performance.
Other hypervisors supported by NetBSD include Intel HAXM (also used with QEMU), and Xen, which has quite a different design.
When running modern operating systems as VM guests, you will generally want to use para-virtualized I/O, rather than having QEMU emulate real hardware devices. On NetBSD, this is supported with the virtio(4) drivers.
Many computers (especially laptops) have hardware virtualization capabilities disabled by default. You may need to enable the necessary features from the firmware at boot.
Before loading the NVMM module, make sure the modules in /stand are correct and up-to-date for the version of the NetBSD kernel you are using.
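One quick way to check this (a sketch; the exact path depends on your architecture and release, which you can confirm with uname(1)) is to list the module directory that matches the running kernel and look for nvmm.kmod:
$ ls /stand/$(uname -m)/$(uname -r)/modules/nvmm/
nvmm.kmod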
The NetBSD Virtual Machine Monitor isn't active by default. It must be activated by loading the nvmm module with modload(8):
# modload nvmm
Verify NVMM is loaded with modstat(8):
# modstat | grep nvmm
nvmm                     misc     filesys  -        0       - -
You can load the module automatically at boot time by adding this line to /etc/modules.conf:
nvmm
Loading NVMM at boot time will also allow the system to run with a secmodel_securelevel(9) of 1, which prevents loading modules after boot. However, since NVMM blocks things like suspend, you may wish to unload it:
# modunload nvmm
By default the /dev/nvmm device is owned by the root user. You probably want to run virtual machines as a non-root user for security reasons, so set the owner of the /dev/nvmm device to something reasonable:
# chown nia:wheel /dev/nvmm
On a machine containing lots of untrusted VMs, you may wish to create a dedicated user or group for them with useradd(8) and groupadd(8).
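For example, you could create a dedicated group and user for this purpose (the names vmm and qemu below are only illustrative) and give that user access to the device:
# groupadd vmm
# useradd -m -g vmm qemu
# chown qemu:vmm /dev/nvmm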
You can see NVMM's current status with nvmmctl(8):
$ nvmmctl identify
nvmm: Kernel API version 2
nvmm: State size 1008
nvmm: Max machines 128
nvmm: Max VCPUs per machine 256
nvmm: Max RAM per machine 128G
nvmm: Arch Mach conf 0
nvmm: Arch VCPU conf 0x3<CPUID,TPR>
nvmm: Guest FPU states 0x3<x87,SSE>
QEMU is a CPU emulator and virtual machine that can use NVMM as an accelerator. It isn't included with NetBSD by default. However, it is available in pkgsrc as emulators/qemu, and can be installed with pkgin:
# pkgin install qemu
This command starts a VM in an X11 window with NVMM acceleration, the same CPU type as the host machine, two CPU cores, and one gigabyte of memory:
$ qemu-system-x86_64 -accel nvmm -cpu max -smp cpus=2 -m 1G \
    -display sdl,gl=on \
    -cdrom NetBSD-9.1-amd64.iso
The guest system will be much slower without acceleration as every CPU instruction will have to be emulated.
You should also be able to see the virtual machine running with nvmmctl(8):
$ nvmmctl list
Machine ID VCPUs RAM  Owner PID Creation Time
---------- ----- ---- --------- ------------------------
0          2     147M     10982 Sat May  8 10:09:59 2021
Generally, you will want to create a virtual drive to contain your virtual machine on the host. We'll create a qcow2 image because it is more versatile than a raw image: it only consumes host disk space as the guest writes data, and it supports snapshots:
$ qemu-img create -f qcow2 netbsd.qcow2 16G
A VirtIO block device provides the best performance. Add the following arguments to qemu-system-x86_64 to use it:
-drive file=netbsd.qcow2,if=none,id=hd0 \
-device virtio-blk-pci,drive=hd0
Older operating systems may not have VirtIO drivers, in which case you can use a normal emulated disk:
-hda netbsd.qcow2
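Putting it together, a complete first boot from the install ISO with a VirtIO disk attached might look like this (adjust file names and memory size to your setup):
$ qemu-system-x86_64 -accel nvmm -cpu max -smp cpus=2 -m 1G \
    -display sdl,gl=on \
    -cdrom NetBSD-9.1-amd64.iso \
    -drive file=netbsd.qcow2,if=none,id=hd0 \
    -device virtio-blk-pci,drive=hd0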
Operating systems require a good source of randomness for system security, cryptography, and so on. In a VM, this is ideally provided by the host machine, which has greater access to the underlying hardware. You can easily attach a VirtIO random number generator device with the following arguments to QEMU:
-object rng-random,filename=/dev/urandom,id=viornd0 \
-device virtio-rng-pci,rng=viornd0
This requires no extra configuration on the host machine.
Entropy is generally required for secure communications. For more information on entropy, refer to entropy(7).
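Inside a NetBSD guest, the device is handled by the viornd(4) driver; you can check that it attached by searching the kernel messages:
$ dmesg | grep viornd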
The simplest way to set up networking with QEMU is so-called "user networking". This will mean raw socket operations like ping(8) won’t work, but normal TCP/IP protocols like HTTP/FTP/etc will work. Another way is with bridged networking, see Section 30.3, “Configuring bridged networking on a NetBSD host”.
The most performant device type is virtio-net-pci:
-netdev user,id=vioif0 -device virtio-net-pci,netdev=vioif0
To use older guest operating systems that don’t support VirtIO, Intel Gigabit Ethernet is a good choice:
-netdev user,id=wm0 -device e1000,netdev=wm0
Or an AMD PCnet card, for very old guest operating systems:
-netdev user,id=pcn0 -device pcnet,netdev=pcn0
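With user networking the guest cannot be reached directly from the host, but QEMU can forward host ports to the guest. For example, assuming sshd(8) is running in the guest, this variation forwards host port 2222 to the guest's SSH port:
-netdev user,id=vioif0,hostfwd=tcp::2222-:22 -device virtio-net-pci,netdev=vioif0
You can then log in from the host with ssh -p 2222 user@127.0.0.1.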
On a NetBSD host, the following QEMU arguments may be used to enable audio:
-audiodev oss,id=oss,out.dev=/dev/audio,in.dev=/dev/audio \
-device ac97,audiodev=oss
ac97 is the classic standardized sound driver for x86 systems. You may wish to change the /dev/audioX device being used, see Chapter 10, Audio.
You may need to adjust things further to get smooth playback, see Section 30.4.3, “Smooth audio playback and latency in VMs”.
These arguments will create an X11 window with OpenGL enabled (for smooth scaling if the window is resized), using a VMware-compatible VGA device and a USB mouse:
-display sdl,gl=on -vga vmware \
-usb -device usb-mouse,bus=usb-bus.0
There is a VMware video driver included with X11 on NetBSD, so the display will automatically configure when startx(1) runs and can be adjusted with xrandr(1).
A VNC display will allow remote access from a VNC client like net/tigervnc, useful when running QEMU with --daemonize on a server:
-display vnc=unix:/home/nia/.qemu-myvm-vnc -vga vmware \
-usb -device usb-mouse,bus=usb-bus.0
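One way to reach such a UNIX socket display from another machine (a sketch, assuming OpenSSH on both ends and an example host name) is to forward a local TCP port to the socket and point the VNC viewer at it:
$ ssh -L 5901:/home/nia/.qemu-myvm-vnc server.example.org
$ vncviewer localhost:1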
A simpler option is a curses display, preferable for systems that don't need more than text output in a terminal:
-display curses
For more information on configuring X11, see Chapter 9, X.
For more information on securely configuring VNC, see QEMU’s online documentation on VNC.
While QEMU user networking is easy to use and doesn't require root privileges, it's generally slower than bridged networking using a tap(4) device, and doesn't allow the use of diagnostic tools like ping(8) inside the guest.
To configure bridged networking on a NetBSD host, you must first make note of your host machine's primary network interface. Find the one with an address assigned and a route to the outside world with ifconfig(8).
In this example, the host machine's primary interface is wm0. All of these commands run on the host machine.
Create a virtual tap(4) interface:
# ifconfig tap0 create
# ifconfig tap0 descr "NetBSD VM" up
Create a bridge(4) connecting the actual interface and the virtual interface:
# ifconfig bridge0 create
# ifconfig bridge0 descr "LAN VM bridge" up
# brconfig bridge0 add tap0 add wm0
Configure NetBSD to do this all at boot time by editing /etc/ifconfig.tap0:
create
descr "NetBSD VM" up
! ifconfig bridge0 create
! ifconfig bridge0 descr "LAN VM bridge" up
! brconfig bridge0 add tap0 add wm0
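You can verify that both interfaces are members of the bridge by running brconfig(8) with only the bridge name, which lists the member interfaces and their state:
# brconfig bridge0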
You can now pass the arguments to QEMU to run with bridged networking:
-netdev tap,id=tap0,ifname=tap0,script=no -device virtio-net-pci,netdev=tap0
For more information on NetBSD network configuration, see Chapter 24, Setting up TCP/IP on NetBSD in practice.
AVOID UNCLEAN SHUTDOWNS! This means pressing Ctrl+C or killing the virtual machine process: QEMU will not necessarily have flushed pending writes to the virtual disk, and data loss is very likely.
You may wish to add the log,noatime mount options in /etc/fstab next to rw to speed up fsck(8). You can also enable the sync option, but this will significantly decrease performance.
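For illustration, a root file system entry in /etc/fstab might then look like this (the device name will vary; a VirtIO disk normally appears as ld0 inside the guest):
/dev/ld0a   /   ffs   rw,log,noatime   1 1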
Always shut down NetBSD safely using the shutdown(8) command and make backups.
QEMU's networking will sometimes configure an invalid IPv6 route on IPv4-only configurations, meaning programs like the NetBSD packaging tools will prefer IPv6 and spend a long time timing out before succeeding.
Work around this by editing /etc/rc.conf to prefer IPv4 addresses:
ip6addrctl=YES
ip6addrctl_policy="ipv4_prefer"
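After a reboot you can verify the resulting address selection policy with ip6addrctl(8):
# ip6addrctl show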
Virtual machines cannot generally provide the same smooth playback at low latency that real hardware provides. For smooth playback, you may need to increase NetBSD's audio latency inside the VM:
$ sysctl -w hw.audio0.blk_ms=100
To set this automatically at boot time, add it to /etc/sysctl.conf.
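That is, the file should contain a line like:
hw.audio0.blk_ms=100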
You can test audio output in the VM. Ensure that audiocfg(1) plays a continuous beep for each channel:
$ audiocfg test 0
Note that on physical hardware where the display resolution is already set properly by the kernel, forcing a VESA mode as described below will disable graphical acceleration.
If you want to increase the size of the x86 console, enter the following at the NetBSD boot prompt:
> vesa 1024x768x32
This setting can be made permanent in /boot.cfg.
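For example, a /boot.cfg menu entry that boots with the larger console could look like this (the menu text is arbitrary):
menu=Boot with 1024x768 console:vesa 1024x768x32;boot netbsd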