Xen is a Type 1 hypervisor which supports running multiple guest operating systems on a single physical machine. The Xen kernel controls the CPU, memory, and console; a dom0 operating system mediates access to other hardware (e.g., disks, network, USB); and one or more domU operating systems run in an unprivileged virtualized environment. I/O requests from the domU systems are forwarded by the Xen hypervisor to the dom0 to be fulfilled.

This document describes what does and does not work when running Xen on NetBSD (upstream documentation might claim something works when it has only been tried on some particular Linux system).

This document is also a HOWTO that presumes a basic familiarity with the Xen system architecture, with installing NetBSD on amd64 hardware, and with installing software from pkgsrc. See also the Xen website.

If this document says that something works, and you find that it does not, it is best to ask on port-xen and, if your report is confirmed, to file a PR.

See also an [earlier Xen tutorial](http://wiki.netbsd.org/tutorials/how_to_set_up_a_guest_os_using_xen3/) which should perhaps be folded into this HOWTO.

  1. Overview
  2. Creating a NetBSD dom0
  3. Unprivileged domains (domU)
  4. domU general setup
  5. domU setup for specific systems
  6. Miscellaneous Information
  7. NetBSD as a domU in a VPS

Overview

The basic concept of Xen is that the hypervisor (xenkernel) runs on the hardware, and runs a privileged domain ("dom0") that can access disks/networking/etc. One then runs additional unprivileged domains (each a "domU"), presumably to do something useful.

This HOWTO addresses how to run a NetBSD dom0 (and hence also build xen itself). It also addresses how to run domUs in that environment, and how to deal with having a domU in a Xen environment run by someone else and/or not running NetBSD.

At system boot, the dom0 kernel is loaded as a module with Xen as the kernel. The dom0 can start one or more domUs. (Booting is explained in detail in the dom0 section.)

There are many choices one can make; the HOWTO recommends the standard approach and limits discussion of alternatives in many cases.

CPU Architecture

Xen runs on x86_64 hardware (the NetBSD amd64 port).

There is a concept of Xen running on arm32 and aarch64, but there are thus far no reports of this working with NetBSD.

The dom0 system must be amd64.

The domU can be i386 PAE or amd64. It can be various operating systems.

Guest Styles

Xen supports different styles of guests. See https://wiki.xenproject.org/wiki/Virtualization_Spectrum for a discussion.

This table shows the styles, if a NetBSD dom0 can run in that style, if a NetBSD dom0 can support that style of guest in a domU, and if NetBSD as a domU can support that style.

Style of guest   dom0 can be?   dom0 can support?   domU can be?   needs hardware virtualization support?
PV               yes            yes                 yes            no
PVH              10+ (exp)      10+                 10+            yes
HVM              N/A            yes                 yes            yes
PVHVM            N/A            yes                 10+            yes

In PV (paravirtualized) mode, the guest OS does not attempt to access hardware directly, but instead makes hypercalls to the hypervisor; PV guests must be specifically coded for Xen. See PV.

Various PVH/HVM modes need hardware support for virtualization. This support is called "VT-X" by Intel, and is denoted by the cpuflag VMX. AMD's support is called AMD-V and denoted by the cpuflag SVM. While these features are not identical, Xen can use either.
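
One way to check from a running NetBSD system is to look for these flags in the CPU identification output; this is a sketch using cpuctl(8), and the exact output format varies by CPU and NetBSD version:

# cpuctl identify 0 | grep -i -e vmx -e svm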

There have been two PVH modes: original PVH and PVHv2. Original PVH was based on PV mode and is no longer relevant at all. Therefore PVHv2 is written as PVH, here and elsewhere. PVH is basically lightweight HVM with PV drivers. A critical feature of it is that qemu is not needed; the hypervisor can do the emulation that is required. Thus, a dom0 can be PVH. The source code uses PVH and config files use pvh -- these refer to PVHv2. See PVH(v2).

With a CPU having the EPT feature, PVH is substantially more efficient than PV because it uses hardware-assisted virtualization. Without EPT, benchmarking results posted in 2024-08 indicate that it is slower for x86_64 guests.

Some CPUs have a feature Hardware-Assisted Paging (HAP). \todo Explain how to tell and what it is used for.

Some Intel CPUs support VT-d, also referred to as IOMMU. This can allow the hardware to mediate PCI passthrough efficiently and safely.

In HVM (Hardware Virtual Machine) mode, guest operating systems with no knowledge of or accommodation for Xen can be run. The dom0 runs qemu to emulate hardware other than the processor. It is therefore nonsensical to have an HVM dom0, because there is no underlying system to provide emulation. HVM can be useful to work around bugs even if some other mode could be used.

In PVHVM mode, the guest runs as HVM, but additionally uses PV drivers for efficiency. Therefore it is nonsensical to have a PVHVM dom0. See PV on HVM. PCI passthrough works on PVHVM. Booting uses the domU's boot blocks and a kernel stored in the domU's filesystem. This can be useful in VPS situations where the owner of the domU has no access to the dom0.

With a CPU having the EPT feature, and perhaps HAP and IOMMU, (PV)HVM appears more efficient than PV. With an older CPU lacking these features, it can be very slow.

\todo Explain more about features and how to tell (xl dmesg), and summarize benchmark results more accurately, once there are a few more.

Xen Versions

In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages xenkernel and xentools. We will refer only to the kernel versions, but note that both packages must be installed together and must have matching versions.

Of the versions released by upstream, a subset are available in pkgsrc:

Xen Version   Package Name   Xen CPU Support   Security EOL   Feature EOL
4.15          xenkernel415   x86_64            2024-04-08     2022-10-08
4.18          xenkernel418   x86_64            2026-11-16     2025-05-16

See also the Xen Support Matrix page and the Xen Security Advisory page.

As of July 2024, both 4.15 and 4.18 generally work well on NetBSD 9 through current with i386 and amd64 guests. The standard approach is 4.18. However, there is a timekeeping issue on some systems with 4.18; see "Timekeeping Woes" below.

Note that with 4.18, i386 PV guests are no longer directly supported; see the "Creating a NetBSD PV Shim domU" section below for the required minor adjustment.

As of July 2024, there are no efforts to package 4.19 (not yet released), and there are hints that 4.20 (not even listed on the support matrix page yet) might be packaged. Note that 4.18 is the current stable release of Xen; pkgsrc is not behind.

NetBSD versions

This HOWTO addresses NetBSD 9 and later; information about EOL NetBSD versions has been pruned.

Xen has been supported in NetBSD for a long time, at least since 2005. NetBSD Xen has always supported PV (originally, Xen simply was PV), in both dom0 and domU.

NetBSD 8 and later as a dom0 supports running HVM domUs.

NetBSD 10 and later as a domU can be run in PVH and PVHVM modes, and as a dom0 can (experimentally) be run in PVH mode.

NetBSD up to and including NetBSD 9 as a dom0 cannot safely run SMP (due to inadequate locking). NetBSD 10 supports SMP in dom0, and XEN3_DOM0 includes "options MULTIPROCESSOR".

NetBSD 6 and later, when run as a domU, can run SMP, using multiple CPUs if provided. The XEN3_DOMU kernel is built with "options MULTIPROCESSOR".

Note that while the current version of Xen is 4.X, the kernel support is still called XEN3, because the hypercall interface has not changed significantly and because renaming it would not be useful.

See also NetBSD Xen daily test results for automated tests of various domU styles and the scripts to run the tests as a source of configuration hints.

Creating a NetBSD dom0

In order to install NetBSD as a dom0, one first installs a normal NetBSD system, and then converts it to a dom0 by changing the kernel and boot configuration.

Installation of NetBSD

Install NetBSD/amd64 just as you would if you were not using Xen. Therefore, use the most recent release, or a build from the most recent stable branch. Alternatively, use -current, being mindful of all the usual caveats of lower stability of current, and likely a bit more so. Think about how you will provide storage for disk images.

Installation of Xen

Building Xen

Use the most recent version of Xen in pkgsrc, unless the DESCR says that it is not suitable. Therefore, choose 4.18. In the dom0, install xenkernel418 and xentools418 from pkgsrc.
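
For example, to build and install both packages from a pkgsrc checkout (a sketch assuming the usual /usr/pkgsrc location; installing binary packages works equally well):

# cd /usr/pkgsrc/sysutils/xenkernel418 && make install clean
# cd /usr/pkgsrc/sysutils/xentools418 && make install clean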

Once this is done, copy the Xen kernel from where pkgsrc puts it to where the boot process will be able to find it:

# cp -p /usr/pkg/xen418-kernel/xen.gz /

Then, place a NetBSD XEN3_DOM0 kernel in the / directory. Such a kernel can be taken from a local build.sh release run, compiled manually, or downloaded from the NetBSD FTP server, for example at:

ftp.netbsd.org/pub/NetBSD/NetBSD-10.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
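
For example, to fetch that kernel and place it in / (a sketch; adjust the release number to match the system):

# ftp https://ftp.netbsd.org/pub/NetBSD/NetBSD-10.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
# cp netbsd-XEN3_DOM0.gz /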

Configuring booting

Read the Xen command-line options documentation.

Read boot.cfg(8) carefully. Add lines to /boot.cfg to boot Xen, adjusting for your root filesystem:

/boot.cfg

menu=Xen:load /netbsd-XEN3_DOM0.gz bootdev=wd0a rndseed=/var/db/entropy-file console=pc;multiboot /xen.gz dom0_mem=1024M
menu=Xen single user:load /netbsd-XEN3_DOM0.gz rndseed=/var/db/entropy-file bootdev=wd0a console=pc -s;multiboot /xen.gz dom0_mem=1024M

The dom0_mem parameter specifies how much RAM should be seen by the dom0 kernel, with the rest (less Xen overhead) available for domU use. If not given, almost all or all memory will be assigned to the dom0. Do not start out with a very low value, because diagnosing too-low dom0 RAM is difficult. In 2018-05, trouble booting a dom0 was reported with 256M of RAM: with 512M it worked reliably. This does not make sense, but if you see "not ELF" after Xen boots, try increasing dom0 RAM.

"bootdev" (or the earlier form "root") is also in general required, because the boot device from /boot is not passed via Xen to the dom0 kernel.

"userconf" statements intended for NetBSD should be attached to the load statement, not the multiboot statement. (\todo Validate and test an example.)

NB: This says add, not replace, so that you will be able to more easily boot a NetBSD kernel without Xen. Once Xen boots ok, you may want to set it as default. It is highly likely that you will have trouble at some point, and keeping an up-to-date GENERIC for use in fixing problems is the standard prudent approach.
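
As a sketch, a /boot.cfg that keeps the plain (non-Xen) entry and makes the Xen entry the default might look like the following (entries are counted from 1; see boot.cfg(8)):

/boot.cfg

menu=Boot normally:rndseed /var/db/entropy-file;boot netbsd
menu=Xen:load /netbsd-XEN3_DOM0.gz bootdev=wd0a rndseed=/var/db/entropy-file console=pc;multiboot /xen.gz dom0_mem=1024M
default=2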

Setting up the dom0 as PVH

A PVH dom0 is currently experimental, as of Xen 4.18. For 4.19 and later, not yet in pkgsrc, it is supported with caveats, notably that SR-IOV is missing and PCI passthrough does not work.

Note that while Xen upstream calls it experimental, the NetBSD/xen community seems to have one person who has gotten it to work and one who is trying but so far not succeeded. This is a clue!

First, get a PV dom0 working following the instructions above.

While GENERIC works as a PV guest, it does not have DOM0 support (XEN3_DOM0 has that support, but is reduced to PV only). Prepare a GENERIC kernel with the additions necessary to be a DOM0, which consist of code to handle DOM0 requests and the dom0-side PV drivers for events, network interfaces, and disks:

options     DOM0OPS
pseudo-device   xenevt
pseudo-device   xvif
pseudo-device   xbdback

# Likely only necessary if you have these devices and booting with them crashes.
no i915drmkms*     at pci?
no radeon*         at pci?
no nouveau*        at pci?
no options      DRM_LEGACY

no options DDB_COMMANDONENTER
#options DDB_COMMANDONENTER="trace"

As an alternative to disabling graphic devices in the kernel, add to /boot.cfg:

userconf=disable i915drmkms*
userconf=disable nouveau*
userconf=disable radeon*

Then, create a copy of the boot line, adding dom0=pvh near dom0_mem, and instead of loading a XEN3_DOM0 kernel, load the modified GENERIC.
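
A sketch of such a boot line, based on the earlier examples (the kernel name of the modified GENERIC is illustrative; adjust bootdev and memory as needed):

/boot.cfg

menu=Xen PVH:load /netbsd-GENERIC-DOM0.gz bootdev=wd0a rndseed=/var/db/entropy-file;multiboot /xen.gz dom0_mem=1024M dom0=pvh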

GENERIC with DOM0OPS and the extra dom0-side PV drivers should work fine on bare metal, and if not it's a bug -- but so far we have no reports either way. In the glorious future, it will be normal to run a dom0 as PVH and a domU as PVH or PVHVM (because both are faster), and thus:

  - DOM0OPS and xenevt, xvif, and xbdback will be in GENERIC
  - XEN3_DOM* kernels will be non-preferred and perhaps not used much

\todo Validate that these instructions are correct, and that they work.

Selecting the console for the boot blocks

See boot_console(8). Make sure the console setup is correct for booting GENERIC before trying to configure Xen.

Generally, one sets the console in bootxx_ffsv1 or equivalent, and this is passed on to /boot (where one typically does not set the console). This configuration should also be in place for Xen systems, to allow seeing messages and typing to /boot. It is necessary for proper booting of GENERIC as a rescue/fallback.

Selecting the console for Xen

There is currently no mechanism to pass console configuration from the boot blocks to the hypervisor (via multiboot). Thus, in addition to configuring the console in the boot blocks (for operation during loading Xen and the dom0, before Xen starts, and for GENERIC), one must also configure it for Xen (if non-default behavior is desired).

See the console= Xen command line option in xen-command-line(7), which says it defaults to "com1,vga". This means that Xen will use both the VGA device and the first serial port (which it calls "com1"; NetBSD calls it "com0") as the console, sending output to both simultaneously. (UART settings should be configured by the boot blocks, but see the com1 argument for unusual situations.)

When Xen uses a serial console, it manages the serial port and, after booting, forwards it to the dom0's xencons(4) device. An exception is that when Xen receives the "conswitch" character (Ctrl-A by default) three times in quick succession, it takes over serial I/O until the switch character is typed three more times. Xen console interactions can be used for various debugging features which are not usually present in the default /xen kernel.

When Xen uses vga as a console, the vga console is relinquished at the conclusion of hypervisor boot, before the dom0 is started. See also "vga=keep". See also "vga=current" to prevent Xen from re-initialising the VGA hardware.

Xen can be configured to explicitly use only a serial port console, e.g.

/boot.cfg

menu=Xen:load /netbsd-XEN3_DOM0.gz rndseed=/var/db/entropy-file bootdev=sd0; multiboot /xen.gz dom0_mem=1024M console=com1

On Xen's console, one can type the escape character (default ^A) three times to switch input from Xen to the dom0. (See ddb(4) for how to get into DDB, which differs from non-Xen amd64.) \todo Xen's vga console is probably output only, but this needs clarifying.

Selecting the console for NetBSD dom0

Note that com0 is never available to a NetBSD dom0; it is instead claimed by Xen, whether or not Xen uses it for a console. \todo Validate this and reference the Xen documentation.

There are two paths: serial console and vga console. By default XEN3_DOM0 uses xencons(4) as its console. It will use vga if console=pc has been passed.

Using xencons(4) is only sensible if Xen is using a serial console, because Xen will have relinquished the vga console. NetBSD will connect its console(4) device to xencons(4) and console I/O will happen there, and thus on the console Xen is using. To use the default xencons(4) for NetBSD and force Xen to serial only:

/boot.cfg

menu=Xen:load /netbsd-XEN3_DOM0.gz rndseed=/var/db/entropy-file bootdev=sd0; multiboot /xen.gz dom0_mem=1024M console=com1

To have NetBSD use VGA for a console, add "console=pc" to the NetBSD part of the boot command. The following example also forces Xen to VGA only.

/boot.cfg

menu=Xen:load /netbsd-XEN3_DOM0.gz rndseed=/var/db/entropy-file bootdev=sd0 console=pc; multiboot /xen.gz dom0_mem=1024M console=vga

If using serial console with Xen, the default of xencons will be correct. NetBSD will obtain input from xencons(4) and send output there. Xen will pass the output to the serial port, and will pass input from the serial port to NetBSD, except for commands for the hypervisor. Thus configure

menu=Xen:load /netbsd-XEN3_DOM0.gz bootdev=sd0; multiboot /xen.gz console=com1 dom0_mem=1024M

to force Xen to use serial only, and for NetBSD implicitly to use xencons(4).

Xen Options Related to NetBSD Versions

See xen-command-line(7), but other than dom0 memory and max_vcpus, tuning options are not generally necessary.

When the dom0 kernel is NetBSD 9 before 2021-04-17 (9.3 is ok), Xen 4.15 and later require "dom0=msr-relaxed=1" on the boot.cfg line. (See /src/sys/arch/x86/x86/pmap.c revision 1.410.) However, no one should be running early NetBSD 9 in 2024 or later, so fix that instead.

With NetBSD 9 and below, one could add dom0_max_vcpus=1 dom0_vcpus_pin, to force only one vcpu to be provided and to pin that vcpu to a physical CPU. \todo Explain if anyone has ever actually measured that this helps, or delete it entirely.

With NetBSD 10 and up, there does not seem to be an argument that pinning or limiting CPUs is a good idea.

From discussion on port-xen@, some machines have problems with timekeeping, and sometimes having one CPU, and further pinned, seems to help.

Xen Options Related to CPU Features

In general, Xen detects CPU features and uses them, as upstream judges prudent. Occasionally this does not go well. Mostly it works, and in general one should enable "virtualization support" (often VMX) in the BIOS, as well as VT-d. The general strategy if Xen crashes on boot is to use a serial console, and if that is not possible/practical, record high-speed video of the vga console. Even with that, the reason for a crash may not be apparent, leading to disabling everything one can until it boots, and then removing disable statements to find the minimal set.

In 2024-09, on an Intel i7-12700K, Xen 4.18 crashed on boot, until CET indirect branch tracking was disabled, via the "cet=no-ibt" xen boot argument.

In 2024-09, on an Intel i7-12700K under Xen 4.18.0, AVX2 instructions resulted in a privileged opcode fault. ccache in particular tries to use AVX2 if the flag is set in cpufeatures. See XSA-435. Adding spec-ctrl=gds-mit=no or spec-ctrl=no-gds-mit did not help; ccache still crashed. Adding (an unreasonably large hammer) spec-ctrl=no did not help either. This seems like a bug as Xen loads microcode which should have the mitigation. However, our Xen 4.18 is at .0, vs .3.

rc.conf

Ensure that the boot scripts installed in /usr/pkg/share/examples/rc.d are in /etc/rc.d, either because you have PKG_RCD_SCRIPTS=yes, or manually. (This is not special to Xen, but a normal part of pkgsrc usage.)
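
If copying manually, something like the following should work (a sketch; the exact set of scripts depends on the xentools version):

# cp /usr/pkg/share/examples/rc.d/xencommons /etc/rc.d/
# cp /usr/pkg/share/examples/rc.d/xendomains /etc/rc.d/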

Set xencommons=YES in rc.conf:

/etc/rc.conf

xencommons=YES

\todo Recommend for/against xen-watchdog.

balloon

The Xen world widely cautions against using the balloon driver in the dom0. The standard advice for production servers is to disable it in xl.conf. For machines that are primarily the dom0 but occasionally run guests, allocating all memory to the dom0 and using balloon seems ok.
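
A sketch of disabling autoballooning (the pkgsrc location of xl.conf is assumed; see xl.conf(5)). With autoballoon off, set dom0_mem explicitly and size domU memory so everything fits:

/usr/pkg/etc/xen/xl.conf

autoballoon="off"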

\todo Understand and describe the interaction of balloon with zfs (which is memory-piggy) in a dom0.

Testing

Now, reboot so that you are running a DOM0 kernel under Xen, rather than GENERIC without Xen.

Once the reboot is done, use xl to inspect Xen's boot messages, available resources, and running domains. For example:

# xl dmesg
... xen's boot info ...
# xl info
... available memory, etc ...
# xl list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0       64    0  r----     58.1

Xen logs will be in /var/log/xen.

Issues with xencommons

xencommons starts xenstored, which stores data on behalf of dom0 and domUs. It does not currently work to stop and start xenstored. Certainly all domUs should be shut down first, following the sort order of the rc.d scripts. However, the dom0 sets up state with xenstored, and is not notified when xenstored exits, so that state is not recreated when a new xenstored starts. Until there's a mechanism to make this work, one should not expect to be able to restart xenstored (and thus xencommons). There is currently no reason to expect that this will get fixed any time soon. \todo Confirm if this is still true in 2020.

Xen-specific NetBSD issues

There are (at least) two additional things different about NetBSD as a dom0 kernel compared to hardware.

One is that through NetBSD 9 the module ABI is different because some of the #defines change, so there are separate sets of modules in /stand. (Further, zfs in Xen is troubled because of differing MAXPHYS; see the zfs howto for more.) In NetBSD 10 and later, there is only one set of modules.

The other difference is that XEN3_DOM0 does not have exactly the same options as GENERIC. While this is roughly agreed to be in large part a bug, users should be aware of this and can simply add missing config items if desired.

Finally, there have been occasional reports of trouble with X11 servers in NetBSD as a dom0. Some hardware support is intentionally disabled in XEN3_DOM0.

Updating Xen in a dom0

Note the previous advice to maintain a working and tested boot config into GENERIC without Xen.

Updating Xen in a dom0 consists of updating the xenkernel and xentools packages, along with copying the xen.gz into place, and of course rebooting.
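
For example, updating in place with pkgsrc's "make replace" and copying the new hypervisor into place (a sketch; upgrading from binary packages works as well):

# cd /usr/pkgsrc/sysutils/xenkernel418 && make replace
# cd /usr/pkgsrc/sysutils/xentools418 && make replace
# cp -p /usr/pkg/xen418-kernel/xen.gz /
# shutdown -r now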

If updating along a Xen minor version, e.g. from 4.13.1 to 4.13.2, or from 4.13.2nb1 to 4.13.2nb3, it is very likely that this can be done on a running system. The point is that the xentools programs will be replaced, and you will be using "xl" from the new installation to talk to the older programs which are still running. Problems from this update path should be reported.

For added safety, shutdown all domUs before updating, to remove the need for new xl to talk to old xenstored. Note that Xen does not guarantee stability of internal ABIs.

If updating across Xen minor versions, e.g. from 4.13 to 4.15, the likelihood of trouble is increased. Therefore, 'make replace' of xentools on a dom0 with running domUs is not recommended. A shutdown on all domUs before replacing xentools is likely sufficient. A safer approach is to boot into GENERIC to replace the packages, as then no Xen code will be running. Single user is another option.

Updating NetBSD in a dom0

This is just like updating NetBSD on bare hardware, assuming the new version supports the version of Xen you are running. Generally, one replaces the kernel and reboots, and then overlays userland binaries and adjusts /etc.

Note that one should update both the non-Xen kernel typically used for rescue purposes, as well as the DOM0 kernel used with Xen.

anita (for testing NetBSD)

With a NetBSD dom0, even without any domUs, one can run anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as root, because anita must create a domU):

anita --vmm=xl test file:///usr/obj/i386/

Unprivileged domains (domU)

This section describes general concepts about domUs. It does not address specific domU operating systems or how to install them. The config files for domUs are typically in /usr/pkg/etc/xen, and are typically named so that the file name, domU name and the domU's host name match.

The domU is provided with CPU and memory by Xen, configured by the dom0. The domU is provided with disk and network by the dom0, mediated by Xen, and configured in the dom0.

Entropy in domUs can be an issue; physical disks and network are on the dom0, so a domU has few entropy sources of its own. NetBSD's /dev/random system works, but is often short of entropy.

NB: When creating a domU, free memory is consumed in the amount specified in the config file, provided to the domU, and 8 MB additional. If that 8 MB is not available (e.g. domU configured RAM equals xl list free amount), the domU will crash in a confusing manner. \todo File a bug report.

Config files

See /usr/pkg/share/examples/xen/xlexample* for a very small number of examples for running GNU/Linux.

The following is an example minimal domain configuration file. The domU serves as a network file server.

/usr/pkg/etc/xen/foo

name = "domU-id"
kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
         'file:/n0/xen/foo-wd1,0x1,w' ]

The domain will have name given in the name setting. The kernel has the host/domU name in it, so that on the dom0 one can update the various domUs independently. The vif line causes an interface to be provided, with a specific mac address (do not reuse MAC addresses!), in bridge mode. Two disks are provided, and they are both writable; the bits are stored in files and Xen attaches them to a vnd(4) device in the dom0 on domain creation. The system treats xbd0 as the boot device without needing explicit configuration.

There is no type line; this implicitly defines a PV domU. Otherwise, one sets type to the lower-case version of the domU type in the table above; see later sections.

By convention, domain config files are kept in /usr/pkg/etc/xen. Note that "xl create" takes the name of a config file, while other commands take the name of a domain.

Examples of commands:

xl create /usr/pkg/etc/xen/foo
xl console domU-id
xl create -c /usr/pkg/etc/xen/foo
xl shutdown domU-id
xl list

Typing ^] will exit the console session. Shutting down a domain is equivalent to pushing the power button; a NetBSD domU will receive a power-press event and do a clean shutdown. Shutting down the dom0 will trigger controlled shutdowns of all configured domUs.

Logs

Look in /var/log/xen/* for logs written at creation time.

CPU and memory

A domain is provided with some number of vcpus; any domain can have up to the number of CPUs seen by the hypervisor. For a domU, this is controlled from the config file by the "vcpus = N" directive. It is normal to overcommit vcpus; a 4-core machine might well provide 4 vcpus to each domU. One might also configure fewer vcpus for a domU.

A domain is provided with memory; this is controlled in the config file by "memory = N" (in megabytes). In the straightforward case, the sum of the memory allocated to the dom0 and all domUs must be less than the available memory.
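
For example, to give a domU two vcpus and 1 GB of RAM, add to its config file:

vcpus = 2
memory = 1024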

Console

A domU with PV support will use xencons(4) as the console, and one can access this from the dom0 with "xl console".

Virtual disks

In domU config files, disks can be defined by key-value pairs or as a sequence of 3-tuples. See the [upstream documentation](https://xenbits.xenproject.org/docs/4.18-testing/man/xl-disk-configuration.5.html).

Read the man page carefully. Note that NetBSD has a culture of using deprecated positional syntax. This HOWTO is converting to keyword syntax.

One big hint is that vdev= must precede target, despite the order in which keywords are documented, and despite the fact that keywords should obviously be usable in any order. \todo Check and maybe file a bug.

A second hint is that for a target in PV mode, one must give a block device, but for a target in HVM mode, one must give the raw device. If passing a block device, xen tries to transform it and adds a spurious r. \todo Check and maybe file a bug.

A third is that vdev can be 0x0 in PV mode but must be something like hda in HVM mode.

\todo Fold in or gc the following.

For key-value pairs: \todo

For 3-tuples:

Examples (key/value and positional; the key/value line below is a sketch of the equivalent keyword form, following xl-disk-configuration(5) and the vdev-before-target hint above):

/usr/pkg/etc/xen/foo

disk = [ 'format=raw, vdev=0x0, access=rw, target=/n0/xen/foo-wd0',
         'file:/n0/xen/foo-wd0,0x0,w' ]

Note that NetBSD by default creates only vnd[0123]. If you need more than 4 total virtual disks at a time, run e.g. "./MAKEDEV vnd4" in the dom0.

Virtual Networking

Xen provides virtual Ethernets, each of which connects the dom0 and a domU. For each virtual network, there is an interface "xvifN.M" in the dom0, and a matching interface xennetM (NetBSD name) in domU index N. The interfaces behave as if there is an Ethernet with two adapters connected. From this primitive, one can construct various configurations. We focus on two common and useful cases for which there are existing scripts: bridging and NAT.

See the [upstream documentation](https://xenbits.xen.org/docs/unstable/man/xl-network-configuration.5.html).

With bridging (in the example above), the domU perceives itself to be on the same network as the dom0. For server virtualization, this is usually best. Bridging is accomplished by creating a bridge(4) device and adding the dom0's physical interface and the various xvifN.0 interfaces to the bridge. One specifies "bridge=bridge0" in the domU config file. The bridge must be set up already in the dom0; an example /etc/ifconfig.bridge0 is:

/etc/ifconfig.bridge0

create
up
!brconfig bridge0 add wm0

With NAT, the domU perceives itself to be behind a NAT running on the dom0. This is often appropriate when running Xen on a workstation. \todo NAT appears to be configured by "vif = [ '' ]".

The MAC address specified is the one used for the interface in the new domain. The interface in dom0 will use this address XOR'd with 00:00:00:01:00:00. Random MAC addresses are assigned if not given.
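
For example, with mac=aa:00:00:d1:00:09 as in the earlier config file, the dom0-side xvif interface would use aa:00:00:d0:00:09.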

Starting domains automatically

To start domains domU-netbsd and domU-linux at boot and shut them down cleanly on dom0 shutdown, add the following in rc.conf:

/etc/rc.conf

xendomains="domU-netbsd domU-linux"

domU general setup

PV/PVH kernels

For PV/PVH, one specifies the domU's kernel as a file in the dom0 filesystem. The kernel must support PV, and can be either uncompressed or compressed (ending in .gz).

Stub domains

Xen has a concept of stub domains, where the qemu part of HVM is in a domU. \todo Explain better, and once understood, migrate this section to where it belongs.

Boot mechanisms

For PV and PVH domUs, the kernel is specified in the config file and taken from the dom0 filesystem.

For HVM (and PVHVM) domUs, the boot code on the domUs disk is executed.

Sometimes, one wants to run a PV or PVH system but have the kernel obtained from within the domU. This is particularly important when the domU administrator lacks privileges on the dom0, such as VPS setups. In these cases, the domU can be configured to load a bootloader, typically grub, instead of the real kernel. Xen then runs the bootloader, which can read the disk, find the real kernel, and then load and run it.

There have been multiple flavors of grub that can be used this way over the years, and the situation is a little confusing.

See Xen's Booting Overview and the more detailed pvgrub2 page.

See Debian's pvgrub page.

For information about how specific providers address booting, see the sections below about Panix and tornadovps.com. See also Bitfolk's Booting page.

The pkgsrc xentools418 package has pygrub, which is very old. There are no recent reports of anyone using it.

domU setup for specific systems

Many of the following examples advise adding lines to config files. While it (mostly?) doesn't matter how lines are ordered, best practice is to keep "type" lines near "kernel" lines, as they tend to require being changed at the same time.

Varying drivers in the domU

For a NetBSD domU using PV drivers (PV, PVH, PVHVM), the disks will appear as xbdN. For one using emulated drivers (HVM), the disks will appear as wdN. See fstab(5), which explains how to use "ROOT." instead of xbd0/wd0, enabling switching modes without changing /etc/fstab.
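
A minimal sketch of such an fstab (partition letters are illustrative):

/etc/fstab

ROOT.a   /      ffs    rw   1 1
ROOT.b   none   none   sw   0 0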

\todo Explain if there is a similar approach for network interfaces.

Creating a NetBSD PV domU

See the earlier config file, and adjust memory. Decide on how much storage you will provide, and prepare it (file or LVM).

While the kernel will be obtained from the dom0 file system, it is helpful but not necessary for the same kernel to be present in the domU as /netbsd so that tools like savecore(8) can work.

The kernel must be specifically built for Xen, to use PV interfaces as a domU. NetBSD release builds provide the following kernels:

    i386 XEN3PAE_DOMU
    amd64 XEN3_DOMU

This will boot NetBSD, but this is not that useful if the disk is empty. One approach is to unpack sets onto the disk outside of Xen (by mounting it, just as you would prepare a physical disk for a system you can't run the installer on).

A second approach is to run an INSTALL kernel, which has a miniroot and can load sets from the network. To do this, copy the INSTALL kernel to / and change the kernel line in the config file to:

    kernel = "/netbsd-INSTALL_XEN3_DOMU"

Then, start the domain as "xl create -c configfile".

Alternatively, if you want to install NetBSD/Xen with a physical CDROM, the following line should be used in the config file.

disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be selected. The CDROM device should be changed to xbd1d.

Once done installing, "halt -p" the new domain (don't reboot or halt: it would reload the INSTALL_XEN3_DOMU kernel even if you changed the config file), switch the config file back to the XEN3_DOMU kernel, and start the new domain again. Now it should be able to use "root on xbd0a" and you should have a functional NetBSD domU.

TODO: check if this is still accurate. When the new domain is booting you'll see some warnings about wscons and the pseudo-terminals. These can be fixed by editing the files /etc/ttys and /etc/wscons.conf. You must disable all terminals in /etc/ttys, except console, like this:

console "/usr/libexec/getty Pc"         vt100   on secure
ttyE0   "/usr/libexec/getty Pc"         vt220   off secure
ttyE1   "/usr/libexec/getty Pc"         vt220   off secure
ttyE2   "/usr/libexec/getty Pc"         vt220   off secure
ttyE3   "/usr/libexec/getty Pc"         vt220   off secure

Finally, all screens must be commented out from /etc/wscons.conf.

One should also run powerd in a domU, but this should not need configuring. With powerd, the domain will run a controlled shutdown if xl shutdown -R or xl shutdown -H is used on the dom0, via receiving a synthetic power button pressed signal. In NetBSD 9 and later, powerd is enabled by default under Xen (or if ACPI is present).

It is not strictly necessary to have a kernel (as /netbsd) in the domU file system. However, various programs (e.g. netstat) will use that kernel to look up symbols to read from kernel virtual memory. If /netbsd is not the running kernel, those lookups will fail. (This is not really a Xen-specific issue, but because the domU kernel is obtained from the dom0, it is far more likely to be out of sync or missing with Xen.)

Note that NetBSD by default creates only xbd[0123]. If you need more virtual disks in a domU, run e.g. "./MAKEDEV xbd4" in the domU.

Creating a NetBSD PV Shim domU

This is like a PV domU, but has a second copy of xen (shim) between dom0 and domU. Note that this is necessary for i386 PV guests with Xen 4.18. For them, performance seems similar to regular PV with older Xen.

Configure as for PV, but add:

type="pvh"
pvshim=1

The domU system itself is unchanged; it still uses a PV (XEN3_DOMU or XEN3PAE_DOMU) kernel, and still sees the same devices. When upgrading from Xen 4.15 to 4.18, the only change needed for an i386 domU is the two lines above.

This use of pvh (for i386 guests) worked ok even on a system that lacked VMX (Intel E5700).

There is 1 data point that it seems to work for amd64 PV guests (on a system with VMX, and one without), but with abysmal performance (on the order of 100x slowdown). \todo Get confirmation from someone else, and decide what to do.

Creating a NetBSD PVH dom0

\todo Expand to say NetBSD 10 and later.

\todo Include needing to add DOM0OPS and the dom0-flavored pseudo-devices to GENERIC.

In boot.cfg, add dom0=pvh (dom0=pv is the default). Configure GENERIC instead of XEN3_DOM0.

Creating a NetBSD PVH domU

Use type='pvh'. Configure a GENERIC kernel instead of XEN3_DOMU.
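
A minimal sketch of a PVH domU config file, modeled on the earlier PV example (the name, MAC address, and paths are illustrative):

/usr/pkg/etc/xen/pvh-foo

type = "pvh"
name = "pvh-foo"
kernel = "/netbsd-GENERIC.gz"
memory = 1024
vcpus = 2
vif = [ 'mac=aa:00:00:d1:00:0a,bridge=bridge0' ]
disk = [ 'file:/n0/xen/pvh-foo-wd0,0x0,w' ]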

There is a PR about i386 PVH guests, but they work fine during daily tests. See PR 57199. See NetBSD daily tests.

Creating a NetBSD HVM domU

Use type='hvm'. Use a GENERIC kernel within the disk image, or an INSTALL cd image. The disk image should have bootblocks, as if it were a real machine. (Note that because GENERIC has PV drivers, this will be PVHVM, unless you remove or disable them.)

Perhaps configure serial="pty" to gain access to the domU's first serial port (and hence console?).

This config snippet was stolen from a port-xen post. It might not quite work, as it was in the context of debugging issues apparently from zvol usage.

type = "hvm"
name = "foo"
memory = 2048
vcpus = 2
vif = [ 'mac=00:01:02:ab:cd:ef, bridge=bridge0' ]
disk =  [ 'format=raw, vdev=hda, access=rw, target=phy:/dev/zvol/dsk/tank/foo' ]

Creating a NetBSD PVHVM domU

Exactly as HVM, except allow the PV drivers that are in GENERIC to remain.

When a PVHVM guest attaches hypervisor0, which happens before regular devices, code in sys/arch/xen/xen/hypervisor.c:hypervisor_attach() asks the hypervisor to disable emulated disks and network. Thus, despite the guest's kernel supporting emulated disks, and the hypervisor supporting them, such a guest will only see PV disks -- which is the point of PVHVM vs HVM.

Creating a FreeBSD domU

For pvh: use type='pvh'. Configure a generic kernel.

For others: \todo

Creating a Linux PV domU

Creating unprivileged Linux domains isn't much different from unprivileged NetBSD domains, but there are some details to know.

\todo Confirm/rototill.

First, the second parameter passed to the disk declaration (the '0x1' in the example below)

disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300 for hda). Linux builds device numbers as (major << 8) + minor. So, hda1, which has major 3 and minor 1 on a Linux system, will have device number 0x301. Alternatively, device names can be used (hda, hdb, ...) as xentools has a table to map these names to device numbers. To export a partition to a Linux guest we can use:

    disk = [ 'phy:/dev/wd0e,0x300,w' ]
    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as root partition.

To install the Linux system on the partition to be exported to the guest domain, the following method can be used: install sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition that will be the root partition of your Linux domain, and mount it. Then copy the files from a working Linux system, make adjustments in /etc (fstab, network config). It should also be possible to extract binary packages such as .rpm or .deb directly to the mounted partition using the appropriate tool, possibly running under NetBSD's Linux emulation. Once the file system has been populated, umount it. If desirable, the file system can be converted to ext3 using tune2fs -j. It should now be possible to boot the Linux guest domain, using one of the vmlinuz-*-xenU kernels available in the Xen binary distribution.

To get the Linux console right, you need to add:

extra = "xencons=tty1"

to your configuration since not all Linux distributions auto-attach a tty to the xen console.
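
Pulling the pieces above together, a sketch of a Linux PV domU config file (the kernel path, MAC address, and disk layout are illustrative):

/usr/pkg/etc/xen/linux-foo

name = "linux-foo"
kernel = "/vmlinuz-xenU"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:0b,bridge=bridge0' ]
disk = [ 'phy:/dev/wd0e,0x300,w' ]
root = "/dev/hda1 ro"
extra = "xencons=tty1"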

Creating a Solaris domU

See the very outdated Solaris domU instructions.

PCI passthrough: Using PCI devices in guest domains

NB: PCI passthrough only works on some Xen versions and as of 2020 it is not clear that it works on any version in pkgsrc. \todo Reports confirming or denying this should be sent to port-xen@.

The dom0 can give other domains access to selected PCI devices. This can allow, for example, a non-privileged domain to have access to a physical network interface or disk controller. However, keep in mind that giving a domain access to a PCI device most likely will give the domain read/write access to the whole physical memory, as PCs without an IOMMU cannot restrict memory access by DMA-capable devices. Also, it's not possible to export ISA devices to non-dom0 domains, which means that the primary VGA adapter can't be exported. A guest domain trying to access the VGA registers will panic.

If the dom0 is NetBSD, it has to be running Xen 3.1, as support has not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to the "pciback" driver in dom0. Devices passed to the dom0 via the pciback.hide boot parameter will attach to "pciback" instead of the usual driver. The list of devices is specified as "(bus:dev.func)", where bus and dev are 2-digit hexadecimal numbers, and func a single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the dom0's boot messages, and the devices should be listed in the /kern/xen/pci directory.

PCI devices to be exported to a domU are listed in the "pci" array of the domU's config file, with the format "0000:bus:dev.func".

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an "xpci" device will show up, to which one or more pci buses will attach. Then the PCI drivers will attach to PCI buses as usual. Note that the default NetBSD DOMU kernels do not have "xpci" or any PCI drivers built in by default; you have to build your own kernel to use PCI devices in a domU. Here's a kernel config example; note that only the "xpci" lines are unusual.

    include         "arch/i386/conf/XEN3_DOMU"

    # Add support for PCI buses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # PCI USB controllers
    uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)

    # USB bus support
    usb*    at uhci?

    # USB Hubs
    uhub*   at usb?
    uhub*   at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass*  at uhub? port ? configuration ? interface ?
    wd*     at umass?
    # SCSI controllers
    ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd*     at scsibus? target ? lun ?      # SCSI disk drives
    cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives

Miscellaneous Information

Grant Table Versions

\todo Explain version 1 and 2 grant tables. Explain which versions are needed by which NetBSD versions and what is default.

See the gnttab Xen command line option.

https://wiki.xenproject.org/wiki/Grant_Table

Grant Table Hypervisor Bug

Some Xen versions return bad values for the 33rd grant table entry. This affects NetBSD 10 always, because it pre-acquires grant table entries. It affects earlier NetBSD and Linux if the 33rd is requested.

http://gnats.netbsd.org/58395

This has so far only been reported at tornadovps, believed to be running 4.14.0.88.g1d1d1f53.

Timekeeping Woes

As of 2024-07, there has been extensive recent discussion about timekeeping problems on dom0 and domU NetBSD systems, perhaps worse with 4.18.

\todo Add link to crisp PR summarizing what we know, after someone(tm) files one.

Configuration of non-NetBSD dom0s to run NetBSD domUs

Apparently one must have "pv-linear-pt=true" on the dom0's Xen command line in order for NetBSD domUs to run. This is the default, but it is only available if Xen is compiled with CONFIG_PV_LINEAR_PT.

https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#pv-linear-pt-x86

Nesting under Linux KVM

It is possible to run Xen and a NetBSD dom0 under Linux KVM. One can enable virtio in the dom0 for greater speed.

Nesting under qemu

It is possible to run Xen and a NetBSD dom0 under qemu on NetBSD, and also with nvmm. \todo Check this.

Other nesting

In theory, any full emulation should be able to run Xen and a NetBSD dom0. The HOWTO does not currently have information about Xen XVM mode, Virtualbox, etc.

NetBSD 5 as domU

NetBSD 5 is known to panic. (However, NetBSD 5 systems should be updated to a supported version.)

NetBSD as a domU in a VPS

The bulk of the HOWTO is about using NetBSD as a dom0 on your own hardware. This section explains how to deal with Xen in a domU as a virtual private server where you do not control or have access to the dom0. This is not intended to be an exhaustive list of VPS providers; only a few are mentioned that specifically support NetBSD.

VPS operators provide varying degrees of access and mechanisms for configuration. The big issue is usually how one controls which kernel is booted, because the kernel is nominally in the dom0 file system (to which VPS users do not normally have access). A second issue is how to install NetBSD. A VPS user may want to compile a kernel for security updates, to run npf, run IPsec, or any other reason why someone would want to change their kernel.

One approach is to have an administrative interface to upload a kernel, or to select from a prepopulated list. Other approaches are pygrub (deprecated) and pvgrub, which are ways to have a bootloader obtain a kernel from the domU file system. This is closer to a regular physical computer, where someone who controls a machine can replace the kernel.

Another issue is multiple CPUs. Since NetBSD 6, domUs support multiple vcpus, and it is typical for VPS providers to enable multiple CPUs for NetBSD domUs.

Complexities due to Xen changes

Xen has many security advisories and people running Xen systems make different choices.

stub domains

Some (Linux) dom0 systems use something called "stub domains" to isolate qemu from the dom0 system, as a security and reliability mechanism when running HVM domUs. Somehow, NetBSD's GENERIC kernel ends up using PIO for disks rather than DMA. Of course, all of this is emulated, but emulated PIO is unusably slow. This problem is not currently understood.

Grant tables

There are multiple versions of the grant table interface, and some security advisories have suggested disabling some versions. NetBSD through 9 uses version 1 and NetBSD-current uses version 2. This can lead to "NetBSD current doesn't run on hosting provider X" situations.

\todo Explain better.

Boot methods

PVHVM

With PVHVM (or HVM), the domU is started running the boot code on the domU disk and will load a kernel from the domU root filesystem, just like a physical machine or qemu. As of early 2024 this is seeming like it might become the preferred method.

pvgrub

pvgrub is a version of grub that uses PV operations instead of BIOS calls. It is booted from the dom0 as the domU kernel, and then reads /grub/menu.lst and loads a kernel from the domU file system.

It appears that grub's FFS code does not support all aspects of modern FFS, but there are also reports that FFSv2 works fine.

pygrub

As of 2014, pygrub seems to be of mostly historical interest. As of 2021, the section should perhaps be outright deleted.

pygrub runs in the dom0 and looks into the domU file system. This implies that the domU must have a kernel in a file system in a format known to pygrub.

pygrub doesn't seem to work to load Linux images under NetBSD dom0, and is inherently less secure than pvgrub due to running inside dom0. For both these reasons, pygrub should not be used, and is only still present so that historical DomU images using it still work.

Kernel in dom0

If you run your own dom0, putting the domU kernel there might be ok.

Specific Providers

The intent is to list providers only if they document support for running NetBSD, and to point to their resources briefly.

panix.com

Panix provides NetBSD as an OS option. See their Colocated Virtual Servers page for more information. NetBSD 9 is available in PV mode, (pvh/pvshim=1 for i386). NetBSD 10 amd64 is available to customers in PVHVM mode, enabling booting a kernel from the VPS's filesystem. (NetBSD 10 also runs in PV and PVH mode on Panix's infrastructure, but PVHVM mode is preferred because it allows easy user control over the kernel.)

tornadovps.com

tornadovps.com provides 9.3, 9.1 and 8.2 (amd64) and 9.1 and 8.2 (i386) netboot installers. Users can use grub2 or pvgrub to boot their own kernel (pvgrub needs a small FAT32 /boot). See the tornadovps.com NetBSD instructions.

The main path for NetBSD is PV mode; HVM modes are experimental.

See the "Grant Table Hypervisor Bug" above.

precedence.co.uk

Precedence Technologies provide Xen-based NetBSD hosting. See their dedicated hosting page for some details.

providers that do not document support for NetBSD

This section contains links to pages explaining how to run NetBSD via Xen at providers that do not document support for NetBSD.