Xen is a Type 1 hypervisor that supports running multiple guest operating systems on a single physical machine. A Xen system consists of the Xen kernel, which controls the CPU, memory and console; a dom0 operating system, which mediates access to other hardware (e.g., disks, network, USB); and one or more domU operating systems, which operate in an unprivileged virtualized environment. I/O requests from the domU systems are forwarded by the Xen hypervisor to the dom0 to be fulfilled.
This document describes the status of Xen on NetBSD (upstream documentation might say that something works when it only works on some particular Linux system).
This document is also a HOWTO that presumes a basic familiarity with the Xen system architecture, with installing NetBSD on amd64 hardware, and with installing software from pkgsrc. See also the Xen website.
If this document says that something works, and you find that it does not, it is best to ask on port-xen and, if you are correct, to file a PR.
Overview
The basic concept of Xen is that the hypervisor (xenkernel) runs on the hardware, and runs a privileged domain ("dom0") that can access disks/networking/etc. One then runs additional unprivileged domains (each a "domU"), presumably to do something useful.
This HOWTO addresses how to run a NetBSD dom0 (and hence also build xen itself). It also addresses how to run domUs in that environment, and how to deal with having a domU in a Xen environment run by someone else and/or not running NetBSD.
There are many choices one can make; the HOWTO recommends the standard approach and limits discussion of alternatives in many cases.
Guest Styles
Xen supports different styles of guests. See https://wiki.xenproject.org/wiki/Virtualization_Spectrum for a discussion.
This table shows the guest styles, whether a NetBSD dom0 can run in that style, whether a NetBSD dom0 can support that style of guest in a domU, and whether NetBSD as a domU can run in that style.
Style of guest | dom0 can be? | dom0 can support? | domU can be? |
---|---|---|---|
PV | yes | yes | yes |
HVM | N/A | yes | yes |
PVHVM | N/A | yes | current only |
PVH | not yet | current only | current only |
In PV (paravirtualized) mode, the guest OS does not attempt to access hardware directly, but instead makes hypercalls to the hypervisor; PV guests must be specifically coded for Xen. See PV.
In HVM (Hardware Virtual Machine) mode, no guest modification is required. However, hardware support is required, such as VT-x on Intel CPUs or SVM on AMD CPUs, to assist with processor emulation. The dom0 runs qemu to emulate hardware other than the processor. It is therefore nonsensical to have an HVM dom0, because there would be no underlying system to provide the emulation.
In PVHVM mode, the guest runs as HVM, but additionally uses PV drivers for efficiency. Therefore it is also nonsensical to have a PVHVM dom0. See PV on HVM.
There have been two PVH modes: original PVH and PVHv2. Original PVH was based on PV mode and is no longer relevant at all. Therefore PVHv2 is written as PVH, here and elsewhere. PVH is basically lightweight HVM with PV drivers. A critical feature of it is that qemu is not needed; the hypervisor can do the emulation that is required. Thus, a dom0 can be PVH. The source code uses PVH and config files use pvh, but NB that this refers to PVHv2. See PVH(v2).
At system boot, the dom0 kernel is loaded as a module with Xen as the kernel. The dom0 can start one or more domUs. (Booting is explained in detail in the dom0 section.)
CPU Architecture
Xen runs on x86_64 hardware (the NetBSD amd64 port).
There is a concept of Xen running on ARM, but there are no reports of this working with NetBSD.
The dom0 system should be amd64. (Instructions for i386 PAE dom0 have been removed from the HOWTO.)
The domU can be i386 PAE or amd64. i386 PAE was at one point considered faster than amd64. However, as of 2021 it is normal to use amd64 as the domU architecture, and use of i386 is dwindling.
Xen Versions
In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages xenkernel and xentools. We will refer only to the kernel versions, but note that both packages must be installed together and must have matching versions.
Versions available in pkgsrc:
Xen Version | Package Name | Xen CPU Support | EOL'ed By Upstream |
---|---|---|---|
4.13 | xenkernel413 | x86_64 | No |
4.15 | xenkernel415 | x86_64 | No |
See also the Xen Security Advisory page.
Older Xen had a python-based management tool called xm; this has been replaced by xl.
NetBSD versions
Xen has been supported in NetBSD for a long time, at least since 2005. Initially Xen was PV only.
NetBSD Xen has always supported PV, in both dom0 and domU; for a long time this was the only way. NetBSD >=8 as a dom0 supports HVM mode in domUs.
Support for PVHVM and PVH is available only in NetBSD-current; this is currently somewhat experimental, although PVHVM appears reasonably solid.
NetBSD up to and including NetBSD 9 as a dom0 cannot safely run SMP. Even if one added "options MULTIPROCESSOR" and configured multiple vcpus, the kernel is likely to crash because of drivers without adequate locking.
NetBSD-current supports SMP in dom0, and XEN3_DOM0 includes "options MULTIPROCESSOR".
NetBSD (since NetBSD 6), when run as a domU, can run SMP, using multiple CPUs if provided. The XEN3_DOMU kernel is built with "options MULTIPROCESSOR".
Note that while Xen 4.15 is current, the kernel support is still called XEN3, because the hypercall interface has not changed significantly.
Creating a NetBSD dom0
To install NetBSD as a dom0, one first installs a normal NetBSD system, and then pivots the installation to a dom0 by changing the kernel and boot configuration.
NB: As of 2021-04, you must arrange to have the system use BIOS boot, not EFI boot.
In 2018-05, trouble booting a dom0 was reported with 256M of RAM: with 512M it worked reliably. This does not make sense, but if you see "not ELF" after Xen boots, try increasing dom0 RAM.
Installation of NetBSD
Install NetBSD/amd64 just as you would if you were not using Xen: use the most recent release, or a build from the most recent stable branch. Alternatively, use -current, being mindful of all the usual caveats of the lower stability of current, likely a bit more so under Xen. Think about how you will provide storage for disk images.
Installation of Xen
Building Xen
Use the most recent version of Xen in pkgsrc, unless the DESCR says that it is not suitable. Therefore, choose 4.15. In the dom0, install xenkernel415 and xentools415 from pkgsrc.
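For example, using binary packages (a sketch; building from pkgsrc source with "make install" works equally well):

# pkgin install xenkernel415 xentools415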
Once this is done, copy the Xen kernel from where pkgsrc puts it to where the boot process will be able to find it:
# cp -p /usr/pkg/xen415-kernel/xen.gz /
Then, place a NetBSD XEN3_DOM0 kernel in the / directory. Such a kernel can be taken from a local release build.sh run, compiled manually, or downloaded from the NetBSD FTP site, for example from:
ftp.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
Configuring booting
Read boot.cfg(8) carefully. Add lines to /boot.cfg to boot Xen, adjusting for your root filesystem:
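For example (a sketch assuming root on wd0a, a VGA console, and the files copied to / as above; adjust names and the root device to taste):

menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc root=wd0a;multiboot /xen.gz dom0_mem=512M
menu=Xen single user:load /netbsd-XEN3_DOM0.gz console=pc root=wd0a -s;multiboot /xen.gz dom0_mem=512M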
This specifies that the dom0 should have 512MB of RAM, leaving the rest to be allocated to domUs.
NB: This says add, not replace, so that you will be able to more easily boot a NetBSD kernel without Xen. Once Xen boots ok, you may want to set it as default. It is highly likely that you will have trouble at some point, and keeping an up-to-date GENERIC for use in fixing problems is the standard prudent approach.
\todo Explain why rndseed is not set with Xen as part of the dom0 subconfiguration.
Note that you are likely to have to set root= because the boot device from /boot is not passed via Xen to the dom0 kernel. With one disk it will work, but e.g. plugging in a USB disk to a machine with root on wd0a causes boot to fail.
Beware that userconf statements must be attached to the dom0 load, and may not be at top-level, because then they would try to configure the hypervisor, if there is a way to pass them via multiboot. It appears that adding userconf=pckbc to /boot.cfg causes Xen to crash very early with a heap overflow.
Console selection
See boot_console(8). Understand that you should start from a place of having console setup correct for booting GENERIC before trying to configure Xen.
Generally for GENERIC, one sets the console in bootxx_ffsv1 or equivalent, and this is passed on to /boot (where one typically does not set the console). This configuration of bootxx_ffsv1 should also be in place for Xen systems, to allow seeing messages from /boot and use of a keyboard to select a line from the menu. And, one should have a working boot path to GENERIC for rescue situations.
With GENERIC, the boot options are passed on to /netbsd, but there is currently no mechanism to pass these via multiboot to the hypervisor. Thus, in addition to configuring the console in the boot blocks, one must also configure it for Xen.
By default, the hypervisor (Xen itself) will use some sort of VGA device as the console, much like GENERIC does by default. The VGA console is relinquished at the conclusion of hypervisor boot, before the dom0 is started. When using a VGA console, Xen does not process console input.
The hypervisor can be configured to use a serial port console, e.g.
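(A sketch, as the multiboot portion of the Xen entry in /boot.cfg:)

multiboot /xen.gz dom0_mem=512M console=com1 com1=9600,8n1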
This example uses the first serial port (Xen counts from 1; this is what NetBSD would call com0), and sets speed and parity. (The dom0 is then configured to use the same serial port in this example.)
With the hypervisor configured for a serial console, it can get input, and there is a notion of passing this input to the dom0. \todo Explain why, if Xen has a serial console, the dom0 console is typically also configured to open that same serial port, instead of getting the passthrough input via the xen console.
One also configures the console for the dom0. While one might expect console=pc to be the default, following the behavior of GENERIC, a hasty read of the code suggests there is no default, and booting without a selected console might lead to a panic. There is also merit in explicit configuration. Therefore the standard approach is to place console=pc (or alternatively console=com0) as part of the load statement for the dom0 kernel.
The NetBSD dom0 kernel will attach xencons(4) (the man page does not exist), but this is not used as a console. It is used to obtain the messages from the hypervisor's console; run "xl dmesg" to see them.
Tuning
In an attempt to improve performance, one can also add dom0_max_vcpus=1 dom0_vcpus_pin, to force only one vcpu to be provided (since a NetBSD dom0 can't use more) and to pin that vcpu to a physical CPU. Xen has many boot options, and other than dom0 memory and max_vcpus, they are generally not necessary.
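These are appended to the multiboot line in /boot.cfg, e.g. (illustrative):

multiboot /xen.gz dom0_mem=512M dom0_max_vcpus=1 dom0_vcpus_pin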
\todo Revisit this advice with current. \todo Explain if anyone has ever actually measured that this helps.
rc.conf
Ensure that the boot scripts installed in /usr/pkg/share/examples/rc.d are in /etc/rc.d, either because you have PKG_RCD_SCRIPTS=yes, or manually. (This is not special to Xen, but a normal part of pkgsrc usage.)
Set xencommons=YES in rc.conf:
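xencommons=YES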
\todo Recommend for/against xen-watchdog.
Testing
Now, reboot so that you are running a DOM0 kernel under Xen, rather than GENERIC without Xen.
Once the reboot is done, use xl to inspect Xen's boot messages, available resources, and running domains. For example:
# xl dmesg
... xen's boot info ...
# xl info
... available memory, etc ...
# xl list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0       64    0  r----     58.1
Xen logs will be in /var/log/xen.
Issues with xencommons
xencommons starts xenstored, which stores data on behalf of dom0 and domUs. It does not currently work to stop and start xenstored. Certainly all domUs should be shut down first, following the sort order of the rc.d scripts. However, the dom0 sets up state with xenstored, and is not notified when xenstored exits, so that state is not recreated when a new xenstored starts. Until there is a mechanism to make this work, one should not expect to be able to restart xenstored (and thus xencommons). There is currently no reason to expect that this will get fixed any time soon.
\todo Confirm if this is still true in 2020.
Xen-specific NetBSD issues
There are (at least) two additional things different about NetBSD as a dom0 kernel compared to NetBSD on bare hardware.
One is that through NetBSD 9 the module ABI is different because some of the #defines change, so there are separate sets of modules in /stand. (Further, zfs in Xen is troubled because of differing MAXPHYS; see the zfs howto for more.) In NetBSD-current, there is only one set of modules.
The other difference is that XEN3_DOM0 does not have exactly the same options as GENERIC. While this is roughly agreed to be in large part a bug, users should be aware of this and can simply add missing config items if desired.
Finally, there have been occasional reports of trouble with X11 servers in NetBSD as a dom0. Some hardware support is intentionally disabled in XEN3_DOM0.
Updating Xen in a dom0
Note the previous advice to maintain a working and tested boot config into GENERIC without Xen.
Updating Xen in a dom0 consists of updating the xenkernel and xentools packages, along with copying the new xen.gz into place, and of course rebooting.
If updating along a Xen minor version, e.g. from 4.13.1 to 4.13.2, or from 4.13.2nb1 to 4.13.2nb3, it is very likely that this can be done on a running system. The point is that the xentools programs will be replaced, and you will be using "xl" from the new installation to talk to the older programs which are still running. Problems from this update path should be reported.
For added safety, shutdown all domUs before updating, to remove the need for new xl to talk to old xenstored. Note that Xen does not guarantee stability of internal ABIs.
If updating across Xen minor versions, e.g. from 4.13 to 4.15, the likelihood of trouble is increased. Therefore, "make replace" of xentools on a dom0 with running domUs is not recommended. A shutdown of all domUs before replacing xentools is likely sufficient. A safer approach is to boot into GENERIC to replace the packages, as then no Xen code will be running. Single user is another option.
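A sketch of the update steps (assuming the 4.15 packages and a pkgsrc source tree in /usr/pkgsrc):

# cd /usr/pkgsrc/sysutils/xenkernel415 && make replace
# cd /usr/pkgsrc/sysutils/xentools415 && make replace
# cp -p /usr/pkg/xen415-kernel/xen.gz /
# shutdown -r now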
Updating NetBSD in a dom0
This is just like updating NetBSD on bare hardware, assuming the new version supports the version of Xen you are running. Generally, one replaces the kernel and reboots, and then overlays userland binaries and adjusts /etc.
Note that one should update both the non-Xen kernel typically used for rescue purposes, as well as the DOM0 kernel used with Xen.
anita (for testing NetBSD)
With a NetBSD dom0, even without any domUs, one can run anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as root, because anita must create a domU):
anita --vmm=xl test file:///usr/obj/i386/
Unprivileged domains (domU)
This section describes general concepts about domUs. It does not address specific domU operating systems or how to install them. The config files for domUs are typically in /usr/pkg/etc/xen, and are typically named so that the file name, domU name and the domU's host name match.
The domU is provided with CPU and memory by Xen, configured by the dom0. The domU is provided with disk and network by the dom0, mediated by Xen, and configured in the dom0.
Entropy in domUs can be an issue; physical disks and network are on the dom0. NetBSD's /dev/random system works, but is often challenged.
Config files
See /usr/pkg/share/examples/xen/xlexample* for a very small number of examples for running GNU/Linux.
The following is an example minimal domain configuration file. The domU serves as a network file server.
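(A sketch; the name, MAC address and file paths are illustrative.)

name = "domU-id"
kernel = "/netbsd-XEN3_DOMU-domU-id.gz"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
disk = [ 'file:/n0/xen/domU-id-wd0,0x0,w',
         'file:/n0/xen/domU-id-wd1,0x1,w' ]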
The domain will have the name given in the name setting. The kernel has the host/domU name in it, so that on the dom0 one can update the various domUs independently. The vif line causes an interface to be provided, with a specific MAC address (do not reuse MAC addresses!), in bridge mode. Two disks are provided, and they are both writable; the bits are stored in files, and Xen attaches them to a vnd(4) device in the dom0 on domain creation. The system treats xbd0 as the boot device without needing explicit configuration.
There is no type line; that implicitly defines a PV domU. Otherwise, one sets type to the lower-case version of the domU type in the table above; see later sections.
By convention, domain config files are kept in /usr/pkg/etc/xen. Note that "xl create" takes the name of a config file, while other commands take the name of a domain.
Examples of commands:
xl create /usr/pkg/etc/xen/foo
xl console domU-id
xl create -c /usr/pkg/etc/xen/foo
xl shutdown domU-id
xl list
Typing ^] will exit the console session. Shutting down a domain is equivalent to pushing the power button; a NetBSD domU will receive a power-press event and do a clean shutdown. Shutting down the dom0 will trigger controlled shutdowns of all configured domUs.
CPU and memory
A domain is provided with some number of vcpus; any domain can have up to the number of CPUs seen by the hypervisor. For a domU, it is controlled from the config file by the "vcpus = N" directive. It is normal to overcommit vcpus; a 4-core machine might well provide 4 vcpus to each domU. One might also configure fewer vcpus for a domU.
A domain is provided with memory; this is controlled in the config file by "memory = N" (in megabytes). In the straightforward case, the sum of the memory allocated to the dom0 and all domUs must be less than the available memory.
Balloon driver
Xen provides a balloon driver, which can be used to let domains use more memory temporarily.
\todo Explain how to set up a system to use the balloon scheme in a useful manner.
Virtual disks
In domU config files, the disks are defined as a sequence of 3-tuples:
The first element is "method:/path/to/disk". Common methods are "file:" for a file-backed vnd, and "phy:" for something that is already a device, such as an LVM logical volume.
The second element is an artifact of how virtual disks are passed to Linux, and a source of confusion with NetBSD Xen usage. Linux domUs are given a device name to associate with the disk, and values like "hda1" or "sda1" are common. In a NetBSD domU, the first disk appears as xbd0, the second as xbd1, and so on. However, xl demands a second argument. The name given is converted to a major/minor by calling stat(2) on the name in /dev, and this is passed to the domU. In the general case, the dom0 and domU can be different operating systems, and it is an unwarranted assumption that they have consistent numbering in /dev, or even that the dom0 OS has a /dev. With NetBSD as both dom0 and domU, using values of 0x0 for the first disk and 0x1 for the second works fine and avoids this issue. For a GNU/Linux guest, one can create /dev/hda1 in /dev, or pass 0x301 for /dev/hda1.
The third element is "w" for writable disks, and "r" for read-only disks.
Example:
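(Paths are illustrative; the first disk is file-backed, the second is a physical partition.)

disk = [ 'file:/n0/xen/domU-id-wd0,0x0,w',
         'phy:/dev/wd0e,0x1,w' ]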
Note that NetBSD by default creates only vnd[0123]. If you need more than 4 total virtual disks at a time, run e.g. "./MAKEDEV vnd4" in the dom0.
Virtual Networking
Xen provides virtual Ethernets, each of which connects the dom0 and a domU. For each virtual network, there is an interface "xvifN.M" in the dom0, and a matching interface xennetM (NetBSD name) in domU index N. The interfaces behave as if there is an Ethernet with two adapters connected. From this primitive, one can construct various configurations. We focus on two common and useful cases for which there are existing scripts: bridging and NAT.
With bridging (in the example above), the domU perceives itself to be on the same network as the dom0. For server virtualization, this is usually best. Bridging is accomplished by creating a bridge(4) device and adding the dom0's physical interface and the various xvifN.0 interfaces to the bridge. One specifies "bridge=bridge0" in the domU config file. The bridge must be set up already in the dom0; an example /etc/ifconfig.bridge0 is:
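(This sketch assumes the dom0's physical interface is wm0.)

create
up
!brconfig bridge0 add wm0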
With NAT, the domU perceives itself to be behind a NAT running on the dom0. This is often appropriate when running Xen on a workstation. TODO: NAT appears to be configured by "vif = [ '' ]".
The MAC address specified is the one used for the interface in the new domain. The interface in dom0 will use this address XOR'd with 00:00:00:01:00:00. Random MAC addresses are assigned if not given.
Starting domains automatically
To start domains domU-netbsd and domU-linux at boot and shut them down cleanly on dom0 shutdown, add the following in rc.conf:
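xendomains="domU-netbsd domU-linux"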
domU setup for specific systems
Creating domUs is almost entirely independent of operating system. We have already presented the basics of config files in the previous section.
Of course, this section presumes that you have a working dom0.
Creating a NetBSD PV domU
See the earlier config file, and adjust memory. Decide on how much storage you will provide, and prepare it (file or LVM).
While the kernel will be obtained from the dom0 file system, the same file should be present in the domU as /netbsd so that tools like savecore(8) can work. (This is helpful but not necessary.)
The kernel must be specifically built for Xen, to use PV interfaces as a domU. NetBSD release builds provide the following kernels:
i386 XEN3PAE_DOMU
amd64 XEN3_DOMU
This will boot NetBSD, but this is not that useful if the disk is empty. One approach is to unpack sets onto the disk outside of Xen (by mounting it, just as you would prepare a physical disk for a system you can't run the installer on).
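A sketch of this approach for a file-backed image (paths and set names are illustrative; the image must first be given a disklabel with an 'a' partition):

# vnconfig vnd0 /n0/xen/domU-id-wd0
# disklabel -e -I vnd0     (create an 'a' partition)
# newfs /dev/rvnd0a
# mount /dev/vnd0a /mnt
# (cd /mnt && tar xzpf /r/base.tgz && tar xzpf /r/etc.tgz)
# umount /mnt
# vnconfig -u vnd0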
A second approach is to run an INSTALL kernel, which has a miniroot and can load sets from the network. To do this, copy the INSTALL kernel to / and change the kernel line in the config file to:
kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
Then, start the domain as "xl create -c configfile".
Alternatively, if you want to install NetBSD/Xen with a CDROM image, the following line should be used in the config file.
disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]
After booting the domain, the option to install via CDROM may be selected. The CDROM device should be changed to xbd1d.
Once done installing, "halt -p" the new domain (don't reboot or halt: it would reload the INSTALL_XEN3_DOMU kernel even if you changed the config file), switch the config file back to the XEN3_DOMU kernel, and start the new domain again. Now it should be able to use "root on xbd0a" and you should have a functional NetBSD domU.
TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about wscons and the pseudo-terminals. These can be fixed by editing the files /etc/ttys and /etc/wscons.conf. You must disable all terminals in /etc/ttys, except console, like this:
console "/usr/libexec/getty Pc" vt100 on secure
ttyE0 "/usr/libexec/getty Pc" vt220 off secure
ttyE1 "/usr/libexec/getty Pc" vt220 off secure
ttyE2 "/usr/libexec/getty Pc" vt220 off secure
ttyE3 "/usr/libexec/getty Pc" vt220 off secure
Finally, all screens must be commented out from /etc/wscons.conf.
One should also run powerd in a domU, but this should not need configuring. With powerd, the domain will run a controlled shutdown if "xl shutdown -R" or "xl shutdown -H" is used on the dom0, via receiving a synthetic power button pressed signal. In 9 and current, powerd is run by default under Xen kernels (or if ACPI is present), and it can be added to rc.conf if not.
It is not strictly necessary to have a kernel (as /netbsd) in the domU file system. However, various programs (e.g. netstat) will use that kernel to look up symbols to read from kernel virtual memory. If /netbsd is not the running kernel, those lookups will fail. (This is not really a Xen-specific issue, but because the domU kernel is obtained from the dom0, it is far more likely to be out of sync or missing with Xen.)
Note that NetBSD by default creates only xbd[0123]. If you need more virtual disks in a domU, run e.g. "./MAKEDEV xbd4" in the domU.
Creating a Linux PV domU
Creating unprivileged Linux domains isn't much different from unprivileged NetBSD domains, but there are some details to know.
First, the second parameter passed to the disk declaration (the '0x1' in the example below)
disk = [ 'phy:/dev/wd0e,0x1,w' ]
does matter to Linux. It wants a Linux device number here (e.g. 0x300 for hda). Linux builds device numbers as (major << 8) + minor, so hda1, which has major 3 and minor 1 on a Linux system, will have device number 0x301. Alternatively, device names can be used (hda, hdb, ...) as xentools has a table to map these names to device numbers. To export a partition to a Linux guest we can use:
disk = [ 'phy:/dev/wd0e,0x300,w' ]
root = "/dev/hda1 ro"
and it will appear as /dev/hda on the Linux system, and be used as the root partition.
To install the Linux system on the partition to be exported to the guest domain, the following method can be used: install sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition that will be the root partition of your Linux domain, and mount it. Then copy the files from a working Linux system, make adjustments in /etc (fstab, network config). It should also be possible to extract binary packages such as .rpm or .deb directly to the mounted partition using the appropriate tool, possibly running under NetBSD's Linux emulation. Once the file system has been populated, umount it. If desirable, the file system can be converted to ext3 using tune2fs -j.
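A sketch of these steps, assuming the partition is /dev/wd0e as in the example above:

# mke2fs /dev/rwd0e
# mount -t ext2fs /dev/wd0e /mnt
(copy files from a working Linux system into /mnt and adjust /mnt/etc)
# umount /mnt
# tune2fs -j /dev/rwd0e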
It should now be possible to boot the Linux guest domain, using one of the vmlinuz-*-xenU kernels available in the Xen binary distribution.
To get the Linux console right, you need to add:
extra = "xencons=tty1"
to your configuration since not all Linux distributions auto-attach a tty to the xen console.
Creating a NetBSD HVM domU
Use type='hvm', probably. Use a GENERIC kernel within the disk image.
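A minimal sketch of such a config (the name, image path and MAC address are illustrative; since HVM boots from the disk image, no kernel line is needed):

name = "domU-hvm"
type = "hvm"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:0a,bridge=bridge0' ]
disk = [ 'file:/n0/xen/domU-hvm.img,hda,w' ]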
Creating a NetBSD PVH domU
This only works with a current kernel in the domU.
Use type='pvh'. Probably, use a GENERIC kernel within the disk image, which in current has PV support.
\todo Verify.
\todo Verify if one can have current PVH domU on a 9 dom0.
Creating a Solaris domU
See possibly outdated Solaris domU instructions.
PCI passthrough: Using PCI devices in guest domains
NB: PCI passthrough only works on some Xen versions and as of 2020 it is not clear that it works on any version in pkgsrc. \todo Reports confirming or denying this notion should be sent to port-xen@.
The dom0 can give other domains access to selected PCI devices. This can allow, for example, a non-privileged domain to have access to a physical network interface or disk controller. However, keep in mind that giving a domain access to a PCI device most likely will give the domain read/write access to the whole physical memory, as PCs don't have an IOMMU to restrict memory access to DMA-capable device. Also, it's not possible to export ISA devices to non-dom0 domains, which means that the primary VGA adapter can't be exported. A guest domain trying to access the VGA registers will panic.
If the dom0 is NetBSD, it has to be running Xen 3.1, as support has not been ported to later versions at this time.
For a PCI device to be exported to a domU, it has to be attached to the "pciback" driver in dom0. Devices passed to the dom0 via the pciback.hide boot parameter will attach to "pciback" instead of the usual driver. The list of devices is specified as "(bus:dev.func)", where bus and dev are 2-digit hexadecimal numbers, and func a single-digit number:
pciback.hide=(00:0a.0)(00:06.0)
pciback devices should show up in the dom0's boot messages, and the devices should be listed in the /kern/xen/pci directory.
PCI devices to be exported to a domU are listed in the "pci" array of the domU's config file, with the format "0000:bus:dev.func".
pci = [ '0000:00:06.0', '0000:00:0a.0' ]
In the domU an "xpci" device will show up, to which one or more PCI buses will attach. The PCI drivers will then attach to PCI buses as usual. Note that the default NetBSD DOMU kernels do not have "xpci" or any PCI drivers built in; you have to build your own kernel to use PCI devices in a domU. Here's a kernel config example; note that only the "xpci" lines are unusual.
include "arch/i386/conf/XEN3_DOMU"
# Add support for PCI buses to the XEN3_DOMU kernel
xpci* at xenbus ?
pci* at xpci ?
# PCI USB controllers
uhci* at pci? dev ? function ? # Universal Host Controller (Intel)
# USB bus support
usb* at uhci?
# USB Hubs
uhub* at usb?
uhub* at uhub? port ? configuration ? interface ?
# USB Mass Storage
umass* at uhub? port ? configuration ? interface ?
wd* at umass?
# SCSI controllers
ahc* at pci? dev ? function ? # Adaptec [23]94x, aic78x0 SCSI
# SCSI bus support (for both ahc and umass)
scsibus* at scsi?
# SCSI devices
sd* at scsibus? target ? lun ? # SCSI disk drives
cd* at scsibus? target ? lun ? # SCSI CD-ROM drives
Miscellaneous Information
Nesting under Linux KVM
It is possible to run Xen and a NetBSD dom0 under Linux KVM. One can enable virtio in the dom0 for greater speed.
Nesting under qemu
It is possible to run Xen and a NetBSD dom0 under qemu on NetBSD, and also with nvmm. \todo Check this.
Other nesting
In theory, any full emulation should be able to run Xen and a NetBSD dom0. The HOWTO does not currently have information about Xen XVM mode, Virtualbox, etc.
NetBSD 5 as domU
NetBSD 5 is known to panic. (However, NetBSD 5 systems should be updated to a supported version.)
NetBSD as a domU in a VPS
The bulk of the HOWTO is about using NetBSD as a dom0 on your own hardware. This section explains how to deal with Xen in a domU as a virtual private server where you do not control or have access to the dom0. This is not intended to be an exhaustive list of VPS providers; only a few are mentioned that specifically support NetBSD.
VPS operators provide varying degrees of access and mechanisms for configuration. The big issue is usually how one controls which kernel is booted, because the kernel is nominally in the dom0 file system (to which VPS users do not normally have access). A second issue is how to install NetBSD. A VPS user may want to compile a kernel for security updates, to run npf, run IPsec, or any other reason why someone would want to change their kernel.
One approach is to have an administrative interface to upload a kernel, or to select from a prepopulated list. Other approaches are pygrub (deprecated) and pvgrub, which are ways to have a bootloader obtain a kernel from the domU file system. This is closer to a regular physical computer, where someone who controls a machine can replace the kernel.
Another issue is multiple CPUs. Since NetBSD 6, domUs support multiple vcpus, and it is typical for VPS providers to enable multiple CPUs for NetBSD domUs.
Complexities due to Xen changes
Xen has many security advisories and people running Xen systems make different choices.
stub domains
Some (Linux) dom0 systems use something called "stub domains" to isolate qemu from the dom0 system, as a security and reliability mechanism when running HVM domUs. Somehow, NetBSD's GENERIC kernel ends up using PIO for disks rather than DMA. Of course, all of this is emulated, but emulated PIO is unusably slow. This problem is not currently understood.
Grant tables
There are multiple versions of using grant tables, and some security advisories have suggested disabling some versions. NetBSD through 9 uses version 1 and NetBSD-current uses version 2. This can lead to "NetBSD current doesn't run on hosting provider X" situations.
\todo Explain better.
Boot methods
pvgrub
pvgrub is a version of grub that uses PV operations instead of BIOS calls. It is booted from the dom0 as the domU kernel, and then reads /grub/menu.lst and loads a kernel from the domU file system.
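A sketch of a domU /grub/menu.lst (grub-legacy syntax; the partition reference and kernel name are illustrative, and details vary by provider):

default=0
timeout=5
title NetBSD
    root (hd0,0)
    kernel /netbsd-XEN3_DOMU root=xbd0a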
It appears that grub's FFS code does not support all aspects of modern FFS, but there are also reports that FFSv2 works fine.
pygrub
As of 2014, pygrub seems to be of mostly historical interest. As of 2021, the section should perhaps be outright deleted.
pygrub runs in the dom0 and looks into the domU file system. This implies that the domU must have a kernel in a file system in a format known to pygrub.
pygrub doesn't seem to work to load Linux images under NetBSD dom0, and is inherently less secure than pvgrub due to running inside dom0. For both these reasons, pygrub should not be used, and is only still present so that historical DomU images using it still work.
Specific Providers
The intent is to list providers only if they document support for running NetBSD, and to point to their resources briefly.
panix.com
Panix provides NetBSD as an OS option. See https://www.panix.com/v-colo/nupgrade.html for some information. Users can use pvgrub. Panix reports that pvgrub works with FFSv2 with 16K/2K and 32K/4K block/frag sizes (and hence with defaults from "newfs -O 2"). See Panix's pvgrub page, which describes how to boot NetBSD.
prgmr.com
prgmr.com provides released versions of NetBSD/amd64 as installation options. Users can use pvgrub to boot their own kernel, and a small FAT32 /boot is encouraged. See the prgmr.com NetBSD HOWTO (which is in need of updating).
Amazon
See the Amazon EC2 page.