Introduction
============
[![Xen screenshot](http://www.netbsd.org/gallery/in-Action/hubertf-xens.png)](http://www.netbsd.org/gallery/in-Action/hubertf-xen.png)
Xen is a hypervisor (or virtual machine monitor) for x86 hardware
(i686-class or higher), which supports running multiple guest
operating systems on a single physical machine. Xen is a Type 1 or
bare-metal hypervisor; one uses the Xen kernel to control the CPU,
memory and console, a dom0 operating system which mediates access to
other hardware (e.g., disks, network, USB), and one or more domU
operating systems which operate in an unprivileged virtualized
environment. IO requests from the domU systems are forwarded by the
hypervisor (Xen) to the dom0 to be fulfilled.
Xen supports two styles of guests. The original is Para-Virtualized
(PV) which means that the guest OS does not attempt to access hardware
directly, but instead makes hypercalls to the hypervisor. This is
analogous to a user-space program making system calls. (The dom0
operating system uses PV calls for some functions, such as updating
memory mapping page tables, but has direct hardware access for disk
and network.) PV guests must be specifically coded for Xen.
The more recent style is HVM, which means that the guest does not have
code for Xen and need not be aware that it is running under Xen.
Attempts to access hardware registers are trapped and emulated. This
style is less efficient but can run unmodified guests.
Generally any amd64 machine will work with Xen and PV guests. In
theory i386 computers without amd64 support can be used for Xen <=
4.2, but we have no recent reports of this working (this is a hint).
For HVM guests, the CPU must support hardware virtualization: VT-x on
Intel (shown as VMX by "cpuctl identify 0") or AMD-V on AMD (shown as
SVM).
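To check for this (a hedged example; the exact feature names in the
output vary slightly by CPU and NetBSD version):

    # look for VMX (Intel) or SVM (AMD) among the CPU features
    cpuctl identify 0 | grep -i -E 'vmx|svm'
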
At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
The dom0 can start one or more domUs. (Booting is explained in detail
in the dom0 section.)
NetBSD supports Xen in that it can serve as dom0, be used as a domU,
and that Xen kernels and tools are available in pkgsrc. This HOWTO
attempts to address both the case of running a NetBSD dom0 on hardware
and running domUs under it (NetBSD and other), and also running NetBSD
as a domU in a VPS.
Some versions of Xen support "PCI passthrough", which means that
specific PCI devices can be made available to a specific domU instead
of the dom0. This can be useful to let a domU run X11, or access some
network interface or other peripheral.
NetBSD used to support Xen2; this has been removed.
Prerequisites
-------------
Installing NetBSD/Xen is not extremely difficult, but it is more
complex than a normal installation of NetBSD.
In general, this HOWTO is occasionally overly restrictive about how
things must be done, guiding the reader to stay on the established
path when there are no known good reasons to stray.
This HOWTO presumes a basic familiarity with the Xen system
architecture. This HOWTO presumes familiarity with installing NetBSD
on i386/amd64 hardware and installing software from pkgsrc.
See also the [Xen website](http://www.xenproject.org/).
Versions of Xen and NetBSD
==========================
Most of the installation concepts and instructions are independent
of Xen version and NetBSD version. This section gives advice on
which version to choose. Versions not in pkgsrc and older unsupported
versions of NetBSD are intentionally ignored.
Xen
---
In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
xenkernel and xentools. We will refer only to the kernel versions,
but note that both packages must be installed together and must have
matching versions.
xenkernel3 and xenkernel33 provide Xen 3.1 and 3.3. These no longer
receive security patches and should not be used. Xen 3.1 supports PCI
passthrough. Xen 3.1 supports non-PAE on i386.
xenkernel41 provides Xen 4.1. This is no longer maintained by Xen,
but as of 2014-12 receives backported security patches. It is a
reasonable although trailing-edge choice.
xenkernel42 provides Xen 4.2. This is maintained by Xen, but old as
of 2014-12.
Ideally newer versions of Xen will be added to pkgsrc.
Note that NetBSD support is called XEN3. It works with 3.1 through
4.2 because the hypercall interface has been stable.
Xen command program
-------------------
Early Xen used a program called "xm" to manipulate the system from the
dom0. Starting in 4.1, a replacement program with similar behavior
called "xl" is provided. In 4.2 and later, "xl" is preferred. 4.4 is
the last version that has "xm".
NetBSD
------
The netbsd-5, netbsd-6, netbsd-7, and -current branches are all
reasonable choices, with more or less the same considerations as for
non-Xen use. For production use, netbsd-6 (the stable branch of the
most recent release) is recommended. For those wanting to learn Xen
or without production stability concerns, netbsd-7 is likely most
appropriate.
As of NetBSD 6, a NetBSD domU will support multiple vcpus. There is
no SMP support for NetBSD as dom0. (The dom0 itself doesn't really
need SMP; the lack of support is really a problem when using a dom0 as
a normal computer.)
Architecture
------------
Xen itself can run on i386 or amd64 machines. (Practically, almost
any computer where one would want to run Xen supports amd64.) If
using an i386 NetBSD kernel for the dom0, PAE is required (PAE
versions are built by default). While i386 dom0 works fine, amd64 is
recommended as more normal.
Xen 4.2 is the last version to support i386 as a host. TODO: Clarify
if this is about the CPU having to be amd64, or about the dom0 kernel
having to be amd64.
One can then run i386 domUs and amd64 domUs, in any combination. If
running an i386 NetBSD kernel as a domU, the PAE version is required.
(Note that emacs (at least) fails if built without PAE and run on a
PAE kernel, and vice versa, presumably due to bugs in the undump
code.)
Recommendation
--------------
Therefore, this HOWTO recommends running xenkernel42 (and xentools42)
with xl, the NetBSD 6 stable branch, and an amd64 kernel as the dom0.
Either i386 or amd64 NetBSD may be used for domUs.
Build problems
--------------
Ideally, all versions of Xen in pkgsrc would build on all versions of
NetBSD on both i386 and amd64. However, that isn't the case. Besides
aging code and aging compilers, qemu (included in xentools for HVM
support) is difficult to build. The following are known to fail:

    xenkernel3 netbsd-6 i386
    xentools42 netbsd-6 i386

The following are known to work:

    xenkernel41 netbsd-5 amd64
    xentools41 netbsd-5 amd64
    xenkernel41 netbsd-6 i386
    xentools41 netbsd-6 i386
NetBSD as a dom0
================
NetBSD can be used as a dom0 and works very well. The following
sections address installation, updating NetBSD, and updating Xen.
Note that it doesn't make sense to talk about installing a dom0 OS
without also installing Xen itself. We first address installing
NetBSD, which is not yet a dom0, and then adding Xen, pivoting the
NetBSD install to a dom0 install by just changing the kernel and boot
configuration.
For experimenting with Xen, a machine with as little as 1G of RAM and
100G of disk can work. For running many domUs in production, far
more will be needed.
Styles of dom0 operation
------------------------
There are two basic ways to use Xen. The traditional method is for
the dom0 to do absolutely nothing other than providing support to some
number of domUs. Such a system was probably installed for the sole
purpose of hosting domUs, and sits in a server room on a UPS.
The other way is to put Xen under a normal-usage computer, so that the
dom0 is what the computer would have been without Xen, perhaps a
desktop or laptop. Then, one can run domUs at will. Purists will
deride this as less secure than the previous approach, and for a
computer whose purpose is to run domUs, they are right. But Xen and a
dom0 (without domUs) is not meaningfully less secure than the same
things running without Xen. One can boot Xen or boot regular NetBSD
alternately with few problems, simply refraining from starting the
Xen daemons when not running Xen.
Note that NetBSD as dom0 does not support multiple CPUs. This will
limit the performance of the Xen/dom0 workstation approach. In theory
the only issue is that the "backend drivers" are not yet MPSAFE; see
[this netbsd-users post](http://mail-index.netbsd.org/netbsd-users/2014/08/29/msg015195.html).
Installation of NetBSD
----------------------
First,
[install NetBSD/amd64](/guide/inst/)
just as you would if you were not using Xen.
However, the partitioning approach is very important.
If you want to use RAIDframe for the dom0, there are no special issues
for Xen. Typically one provides RAID storage for the dom0, and the
domU systems are unaware of RAID. The 2nd-stage loader bootxx_* skips
over a RAID1 header to find /boot from a filesystem within a RAID
partition; this is no different when booting Xen.
There are 4 styles of providing backing storage for the virtual disks
used by domUs: raw partitions, LVM, file-backed vnd(4), and SAN.
With raw partitions, one has a disklabel (or gpt) partition sized for
each virtual disk to be used by the domU. (If you are able to predict
how domU usage will evolve, please add an explanation to the HOWTO.
Seriously, needs tend to change over time.)
One can use [lvm(8)](/guide/lvm/) to create logical devices to use
for domU disks. This is almost as efficient as raw disk partitions
and more flexible. Hence raw disk partitions should typically not
be used.
One can use files in the dom0 filesystem, typically created by dd'ing
/dev/zero into a file of the desired size. This is somewhat less efficient,
but very convenient, as one can cp the files for backup, or move them
between dom0 hosts.
Finally, in theory one can place the files backing the domU disks in a
SAN. (This is an invitation for someone who has done this to add a
HOWTO page.)
Installation of Xen
-------------------
In the dom0, install sysutils/xenkernel42 and sysutils/xentools42 from
pkgsrc (or another matching pair).
See [the pkgsrc
documentation](http://www.NetBSD.org/docs/pkgsrc/) for help with pkgsrc.
For Xen 3.1, support for HVM guests is in sysutils/xentools3-hvm. More
recent versions have HVM support integrated in the main xentools
package. It is entirely reasonable to run only PV guests.
Next you need to install the selected Xen kernel itself, which is
installed by pkgsrc as "/usr/pkg/xen*-kernel/xen.gz". Copy it to /.
For debugging, one may copy xen-debug.gz; this is conceptually similar
to DIAGNOSTIC and DEBUG in NetBSD. xen-debug.gz is basically only
useful with a serial console. Then, place a NetBSD XEN3_DOM0 kernel
in /, copied from releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
of a NetBSD build. Both xen and NetBSD may be left compressed. (If
using i386, use releasedir/i386/binary/kernel/netbsd-XEN3PAE_DOM0.gz.)
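For example, with xenkernel42 and an amd64 build (the install path
follows the /usr/pkg/xen*-kernel pattern above; adjust for your
version):

    cp /usr/pkg/xen42-kernel/xen.gz /
    cp releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz /
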
With Xen as the kernel, you must provide a dom0 NetBSD kernel to be
used as a module; place this in /. Suitable kernels are provided in
releasedir/binary/kernel:

    i386 XEN3_DOM0
    i386 XEN3PAE_DOM0
    amd64 XEN3_DOM0

The first one is only for use with Xen 3.1 and i386-mode Xen (and you
should not do this). Current Xen always uses PAE on i386, but you
should generally use amd64 for the dom0. In a dom0 kernel, kernfs is
mandatory for xend to communicate with the kernel, so ensure that /kern
is in fstab. TODO: Say this is default, or file a PR and give a
reference.
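If /kern is not already in /etc/fstab, an entry using the standard
kernfs syntax looks like:

    kernfs /kern kernfs rw
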
Because you already installed NetBSD, you have a working boot setup
with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
beginning of your root filesystem, /boot present, and likely
/boot.cfg. (If not, fix before continuing!)
See boot.cfg(5) for an example. The basic line is

    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M

which specifies that the dom0 should have 256M, leaving the rest to be
allocated for domUs. To improve performance, one can also add

    dom0_max_vcpus=1 dom0_vcpus_pin

to force only one vcpu to be provided (since NetBSD dom0 can't use
more) and to pin that vcpu to a physical cpu. TODO: benchmark this.
As with non-Xen systems, you should have a line to boot /netbsd (a
kernel that works without Xen) and fallback versions of the non-Xen
kernel, Xen, and the dom0 kernel.
Using grub (historic)
---------------------
Before NetBSD's native bootloader could support Xen, the use of
grub was recommended. If necessary, see the
[old grub information](/ports/xen/howto-grub/).
The [HowTo on Installing into
RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
explains how to set up booting a dom0 with Xen using grub with
NetBSD's RAIDframe. (This is obsolete with the use of NetBSD's native
boot.)
Configuring Xen
---------------
Xen logs will be in /var/log/xen.
Now, you have a system that will boot Xen and the dom0 kernel, and
just run the dom0 kernel. There will be no domUs, and none can be
started because you still have to configure the dom0 tools. The
daemons which should be run vary with Xen version and with whether one
is using xm or xl. Note that xend is for supporting "xm", and should
only be used if you plan on using "xm". Do NOT enable xend if you
plan on using "xl" as it will cause problems.
The installation of NetBSD should already have created devices for xen
(xencons, xenevt), but if they are not present, create them:

    cd /dev && sh MAKEDEV xen
TODO: Give 3.1 advice (or remove it from pkgsrc).
For 3.3 (and thus xm), add to rc.conf (but note that you should have
installed 4.1 or 4.2):

    xend=YES
    xenbackendd=YES

For 4.1 (and thus xm; xl is believed not to work well), add to rc.conf:

    xencommons=YES
    xend=YES

(If you are using xentools41 from before 2014-12-26, change
rc.d/xendomains to use xm rather than xl.)

For 4.2 with xm, add to rc.conf:

    xencommons=YES
    xend=YES

For 4.2 with xl (preferred), add to rc.conf:

    xencommons=YES
TODO: explain if there is a xend replacement
TODO: Recommend for/against xen-watchdog.
After you have configured the daemons and either started them (in the
order given) or rebooted, run the following (or use xl) to inspect
Xen's boot messages, available resources, and running domains:

    # xm dmesg
    [xen's boot info]
    # xm info
    [available memory, etc.]
    # xm list
    Name              Id  Mem(MB)  CPU  State  Time(s)  Console
    Domain-0           0       64    0  r----     58.1
anita (for testing NetBSD)
--------------------------
With the setup so far, one should be able to run anita (see
pkgsrc/sysutils/py-anita) to test NetBSD releases, by doing (as root,
because anita must create a domU):

    anita --vmm=xm test file:///usr/obj/i386/

Alternatively, one can use --vmm=xl to use xl-based domU creation instead.
TODO: check this.
Xen-specific NetBSD issues
--------------------------
There are (at least) two additional things different about NetBSD as a
dom0 kernel compared to hardware.
One is that modules are not usable in DOM0 kernels, so one must
compile in what's needed. It's not really that modules cannot work,
but that modules must be built for XEN3_DOM0 because some of the
defines change and the normal module builds don't do this. Basically,
enabling Xen changes the kernel ABI, and the module build system
doesn't cope with this.
The other difference is that XEN3_DOM0 does not have exactly the same
options as GENERIC. While it is debatable whether or not this is a
bug, users should be aware of this and can simply add missing config
items if desired.
Updating NetBSD in a dom0
-------------------------
This is just like updating NetBSD on bare hardware, assuming the new
version supports the version of Xen you are running. Generally, one
replaces the kernel and reboots, and then overlays userland binaries
and adjusts /etc.
Note that one must update both the non-Xen kernel typically used for
rescue purposes and the DOM0 kernel used with Xen.
Converting from grub to /boot
-----------------------------
These instructions were [TODO: will be] used to convert a system from
grub to /boot. The system was originally installed in February of
2006 with a RAID1 setup and grub to boot Xen 2, and has been updated
over time. Before these commands, it was running NetBSD 6 i386, Xen
4.1 and grub, much like the message linked earlier in the grub
section.

    # Install mbr bootblocks on both disks.
    fdisk -i /dev/rwd0d
    fdisk -i /dev/rwd1d
    # Install NetBSD primary boot loader (/ is FFSv1) into RAID1 components.
    installboot -v /dev/rwd0d /usr/mdec/bootxx_ffsv1
    installboot -v /dev/rwd1d /usr/mdec/bootxx_ffsv1
    # Install secondary boot loader
    cp -p /usr/mdec/boot /
    # Create boot.cfg following earlier guidance:
    menu=Xen:load /netbsd-XEN3PAE_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
    menu=Xen.ok:load /netbsd-XEN3PAE_DOM0.ok.gz console=pc;multiboot /xen.ok.gz dom0_mem=256M
    menu=GENERIC:boot
    menu=GENERIC single-user:boot -s
    menu=GENERIC.ok:boot netbsd.ok
    menu=GENERIC.ok single-user:boot netbsd.ok -s
    menu=Drop to boot prompt:prompt
    default=1
    timeout=30

TODO: actually do this and fix it if necessary.
Updating Xen versions
---------------------
Updating Xen is conceptually not difficult, but can run into all the
issues found when installing Xen. Assuming migration from 4.1 to 4.2,
remove the xenkernel41 and xentools41 packages and install the
xenkernel42 and xentools42 packages. Copy the 4.2 xen.gz to /.
Ensure that the contents of /etc/rc.d/xen* are correct. Enable the
correct set of daemons. Ensure that the domU config files are valid
for the new version.
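A hedged sketch of those steps for a 4.1-to-4.2 migration (package
names and install paths follow the pkgsrc defaults; adjust as needed):

    pkg_delete xentools41 xenkernel41
    cd /usr/pkgsrc/sysutils/xenkernel42 && make install clean
    cd /usr/pkgsrc/sysutils/xentools42 && make install clean
    cp /usr/pkg/xen42-kernel/xen.gz /
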
Unprivileged domains (domU)
===========================
This section describes general concepts about domUs. It does not
address specific domU operating systems or how to install them. The
config files for domUs are typically in /usr/pkg/etc/xen, and are
typically named so that the file name, domU name and the domU's host
name match.
The domU is provided with cpu and memory by Xen, configured by the
dom0. The domU is provided with disk and network by the dom0,
mediated by Xen, and configured in the dom0.
Entropy in domUs can be an issue; physical disks and network are on
the dom0, so a domU has few sources of randomness. NetBSD's
/dev/random system works, but can be slow to gather entropy.
Config files
------------
There is no good order to present config files and the concepts
surrounding what is being configured. We first show an example config
file, and then in the various sections give details.
See /usr/pkg/share/examples/xen/xmexample* (at least in xentools41)
for a large number of well-commented examples, mostly for running
GNU/Linux.
The following is an example minimal domain configuration file
"/usr/pkg/etc/xen/foo". It is (with only a name change) an actual
known working config file on Xen 4.1 (NetBSD 5 amd64 dom0 and NetBSD 5
i386 domU). The domU serves as a network file server.
    # -*- mode: python; -*-
    kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
    memory = 1024
    vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
    disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
             'file:/n0/xen/foo-wd1,0x1,w' ]
The domain will have the same name as the file. The kernel has the
host/domU name in it, so that on the dom0 one can update the various
domUs independently. The vif line causes an interface to be provided,
with a specific mac address (do not reuse MAC addresses!), in bridge
mode. Two disks are provided, and they are both writable; the bits
are stored in files and Xen attaches them to a vnd(4) device in the
dom0 on domain creation. The system treats xbd0 as the boot device
without needing explicit configuration.
By default xm looks for domain config files in /usr/pkg/etc/xen. Note
that "xm create" takes the name of a config file, while other commands
take the name of a domain. To create the domain, connect to its
console, create the domain while attaching the console, shut down the
domain, and check whether it has finished stopping, use the following
commands respectively (or xl with Xen >= 4.2):

    xm create foo
    xm console foo
    xm create -c foo
    xm shutdown foo
    xm list
Typing ^] will exit the console session. Shutting down a domain is
equivalent to pushing the power button; a NetBSD domU will receive a
power-press event and do a clean shutdown. Shutting down the dom0
will trigger controlled shutdowns of all configured domUs.
domU kernels
------------
On a physical computer, the BIOS reads sector 0, and a chain of boot
loaders finds and loads a kernel. Normally this comes from the root
filesystem. With Xen domUs, the process is totally different. The
normal path is for the domU kernel to be a file in the dom0's
filesystem. At the request of the dom0, Xen loads that kernel into a
new domU instance and starts execution. While domU kernels can be
anyplace, reasonable places to store domU kernels on the dom0 are in /
(so they are near the dom0 kernel), in /usr/pkg/etc/xen (near the
config files), or in /u0/xen (where the vdisks are).
Note that loading the domU kernel from the dom0 implies that boot
blocks, /boot, /boot.cfg, and so on are all ignored in the domU.
See the VPS section near the end for discussion of alternate ways to
obtain domU kernels.
CPU and memory
--------------
A domain is provided with some number of vcpus, less than the number
of cpus seen by the hypervisor. (For a dom0, this is controlled by
the boot argument "dom0_max_vcpus=1".) For a domU, it is controlled
from the config file by the "vcpus = N" directive.
A domain is provided with memory; this is controlled in the config
file by "memory = N" (in megabytes). In the straightforward case, the
sum of the memory allocated to the dom0 and all domUs must be less
than the available memory.
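For example, to give a domU two vcpus and 1 GB of memory, the config
file would contain:

    vcpus = 2
    memory = 1024
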
Xen also provides a "balloon" driver, which can be used to let domains
use more memory temporarily. TODO: Explain better, and explain how
well it works with NetBSD.
Virtual disks
-------------
With the file/vnd style, typically one creates a directory,
e.g. /u0/xen, on a disk large enough to hold virtual disks for all
domUs. Then, for each domU disk, one writes zeros to a file that then
serves to hold the virtual disk's bits; a suggested name is foo-xbd0
for the first virtual disk for the domU called foo. Writing zeros to
the file serves two purposes. One is that preallocating the contents
improves performance. The other is that vnd on sparse files has
failed to work. TODO: give working/notworking NetBSD versions for
sparse vnd. Note that the use of file/vnd for Xen is not really
different than creating a file-backed virtual disk for some other
purpose, except that xentools handles the vnconfig commands. To
create an empty 4G virtual disk, simply do

    dd if=/dev/zero of=foo-xbd0 bs=1m count=4096
With the lvm style, one creates logical devices. They are then used
similarly to vnds. TODO: Add an example with lvm.
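A hedged sketch, assuming a volume group named vg0 already exists
(see the lvm guide; the device path under /dev may vary by NetBSD
version):

    # create a 4G logical volume for the first disk of domU "foo"
    lvm lvcreate -L 4G -n foo-xbd0 vg0
    # then reference it from the domU config file, e.g.
    # disk = [ 'phy:/dev/mapper/vg0-foo--xbd0,0x0,w' ]
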
In domU config files, the disks are defined as a sequence of 3-tuples.
The first element is "method:/path/to/disk". Common methods are
"file:" for file-backed vnd. and "phy:" for something that is already
a (TODO: character or block) device.
The second element is an artifact of how virtual disks are passed to
Linux, and a source of confusion with NetBSD Xen usage. Linux domUs
are given a device name to associate with the disk, and values like
"hda1" or "sda1" are common. In a NetBSD domU, the first disk appears
as xbd0, the second as xbd1, and so on. However, xm/xl demand a
second argument. The name given is converted to a major/minor by
calling stat(2) on the name in /dev and this is passed to the domU.
In the general case, the dom0 and domU can be different operating
systems, and it is an unwarranted assumption that they have consistent
numbering in /dev, or even that the dom0 OS has a /dev. With NetBSD
as both dom0 and domU, using values of 0x0 for the first disk and 0x1
for the second works fine and avoids this issue. For a GNU/Linux
guest, one can create /dev/hda1 in /dev, or pass 0x301 for
/dev/hda1.
The third element is "w" for writable disks, and "r" for read-only
disks.
Virtual Networking
------------------
Xen provides virtual ethernets, each of which connects the dom0 and a
domU. For each virtual network, there is an interface "xvifN.M" in
the dom0, and in domU index N, a matching interface xennetM (NetBSD
name). The interfaces behave as if there is an Ethernet with two
adaptors connected. From this primitive, one can construct various
configurations. We focus on two common and useful cases for which
there are existing scripts: bridging and NAT.
With bridging (in the example above), the domU perceives itself to be
on the same network as the dom0. For server virtualization, this is
usually best. Bridging is accomplished by creating a bridge(4) device
and adding the dom0's physical interface and the various xvifN.0
interfaces to the bridge. One specifies "bridge=bridge0" in the domU
config file. The bridge must be set up already in the dom0; an
example /etc/ifconfig.bridge0 is:

    create
    up
    !brconfig bridge0 add wm0
With NAT, the domU perceives itself to be behind a NAT running on the
dom0. This is often appropriate when running Xen on a workstation.
TODO: NAT appears to be configured by "vif = [ '' ]".
The MAC address specified is the one used for the interface in the new
domain. The interface in dom0 will use this address XOR'd with
00:00:00:01:00:00. Random MAC addresses are assigned if not given.
Sizing domains
--------------
Modern x86 hardware has vast amounts of resources. However, many
virtual servers can function just fine on far less. A system with
256M of RAM and a 4G disk can be a reasonable choice. Note that it is
far easier to adjust virtual resources than physical ones. For
memory, it's just a config file edit and a reboot. For disk, one can
create a new file and vnconfig it (or lvm), and then dump/restore,
just like updating physical disks, but without having to be there and
without those pesky connectors.
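For instance, a hedged sketch of moving a domU's filesystem onto a
larger file-backed disk (run in the dom0 with the domU shut down;
vnd0 and vnd1 are assumed free, and the label and partition details
are examples):

    dd if=/dev/zero of=foo-xbd0.new bs=1m count=8192   # new 8G image
    vnconfig vnd0 foo-xbd0
    vnconfig vnd1 foo-xbd0.new
    disklabel -i -I vnd1        # create an 'a' partition on the new disk
    newfs /dev/rvnd1a
    mount /dev/vnd1a /mnt
    dump -0f - /dev/rvnd0a | (cd /mnt && restore -rf -)
    umount /mnt
    vnconfig -u vnd1
    vnconfig -u vnd0
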
Starting domains automatically
------------------------------
To start domains foo and bar at boot and shut them down cleanly on dom0
shutdown, in rc.conf add:

    xendomains="foo bar"
TODO: Explain why 4.1 rc.d/xendomains has xl, when one should use xm
on 4.1. Or fix the xentools41 package to have xm.
Creating specific unprivileged domains (domU)
=============================================
Creating domUs is almost entirely independent of operating system. We
have already presented the basics of config files. Note that you must
have already completed the dom0 setup so that "xl list" (or "xm list")
works.
Creating an unprivileged NetBSD domain (domU)
---------------------------------------------
See the earlier config file, and adjust memory. Decide on how much
storage you will provide, and prepare it (file or lvm).
While the kernel will be obtained from the dom0 filesystem, the same
file should be present in the domU as /netbsd so that tools like
savecore(8) can work. (This is helpful but not necessary.)
The kernel must be specifically for Xen and for use as a domU. The
i386 and amd64 releases provide the following kernels:

    i386 XEN3_DOMU
    i386 XEN3PAE_DOMU
    amd64 XEN3_DOMU

Unless using Xen 3.1 (and you shouldn't) with i386-mode Xen, you must
use the PAE version of the i386 kernel.
This will boot NetBSD, but this is not that useful if the disk is
empty. One approach is to unpack sets onto the disk outside of Xen
(by mounting it, just as you would prepare a physical disk for a
system you can't run the installer on).
A second approach is to run an INSTALL kernel, which has a miniroot
and can load sets from the network. To do this, copy the INSTALL
kernel to / and change the kernel line in the config file to:

    kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"

Then, start the domain as "xl create -c configname".
Alternatively, if you want to install NetBSD/Xen with a CDROM image, the
following line should be used in the config file:

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]
After booting the domain, the option to install via CDROM may be
selected. The CDROM device should be changed to `xbd1d`.
Once done installing, "halt -p" the new domain (don't reboot or halt,
it would reload the INSTALL_XEN3_DOMU kernel even if you changed the
config file), switch the config file back to the XEN3_DOMU kernel,
and start the new domain again. Now it should be able to use "root on
xbd0a" and you should have a functional NetBSD domU.
TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
`/etc/ttys`, except *console*, like this:

    console "/usr/libexec/getty Pc"         vt100   on secure
    ttyE0   "/usr/libexec/getty Pc"         vt220   off secure
    ttyE1   "/usr/libexec/getty Pc"         vt220   off secure
    ttyE2   "/usr/libexec/getty Pc"         vt220   off secure
    ttyE3   "/usr/libexec/getty Pc"         vt220   off secure
Finally, all screens must be commented out from `/etc/wscons.conf`.
It is also desirable to add

    powerd=YES

in rc.conf. This way, the domain will be properly shut down if
`xm shutdown -R` or `xm shutdown -H` is used on the dom0.
Your domain should now be ready to work; enjoy.
Creating an unprivileged Linux domain (domU)
--------------------------------------------
Creating unprivileged Linux domains isn't much different from
unprivileged NetBSD domains, but there are some details to know.
First, the second parameter passed to the disk declaration (the '0x1' in
the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda). Linux builds device numbers as `(major << 8) + minor`.
So, hda1 which has major 3 and minor 1 on a Linux system will have
device number 0x301. Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers. To export a partition to a Linux guest we can use:

    disk = [ 'phy:/dev/wd0e,0x300,w' ]
    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as the
root partition.
To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, make adjustments in
`/etc` (fstab, network config). It should also be possible to extract
binary packages such as .rpm or .deb directly to the mounted partition
using the appropriate tool, possibly running under NetBSD's Linux
emulation. Once the filesystem has been populated, umount it. If
desirable, the filesystem can be converted to ext3 using tune2fs -j.
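A hedged sketch of those steps from the dom0 (assuming /dev/wd0e is
the partition to be exported, as in the example above):

    # format and mount the future Linux root filesystem
    mke2fs /dev/rwd0e                  # from sysutils/e2fsprogs
    mount -t ext2fs /dev/wd0e /mnt
    # ... populate /mnt from a working Linux system, adjust /mnt/etc ...
    umount /mnt
    # optionally convert the filesystem to ext3
    tune2fs -j /dev/rwd0e
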
It should now be possible to boot the Linux guest domain, using one of
the vmlinuz-\*-xenU kernels available in the Xen binary distribution.
To get the Linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration since not all Linux distributions auto-attach a
tty to the xen console.
Creating an unprivileged Solaris domain (domU)
----------------------------------------------
See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/).
PCI passthrough: Using PCI devices in guest domains
---------------------------------------------------
The dom0 can give other domains access to selected PCI
devices. This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller. However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices. Also, it's not possible to export ISA devices to non-dom0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.
If the dom0 is NetBSD, it has to be running Xen 3.1, as support has
not been ported to later versions at this time.
For a PCI device to be exported to a domU, it has to be attached to
the "pciback" driver in dom0. Devices passed to the dom0 via the
pciback.hide boot parameter will attach to "pciback" instead of the
usual driver. The list of devices is specified as "(bus:dev.func)",
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)
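With a NetBSD dom0, this parameter goes on the dom0 kernel's load line
in /boot.cfg; a hedged example, adapting the earlier boot entry:

    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc pciback.hide=(00:0a.0)(00:06.0);multiboot /xen.gz dom0_mem=256M
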
pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.
PCI devices to be exported to a domU are listed in the "pci" array of
the domU's config file, with the format "0000:bus:dev.func":

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]
In the domU an "xpci" device will show up, to which one or more pci
busses will attach. Then the PCI drivers will attach to PCI busses as
usual. Note that the default NetBSD DOMU kernels do not have "xpci"
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU. Here's a kernel config example;
note that only the "xpci" lines are unusual.
include "arch/i386/conf/XEN3_DOMU"
# Add support for PCI busses to the XEN3_DOMU kernel
xpci* at xenbus ?
pci* at xpci ?
# PCI USB controllers
uhci* at pci? dev ? function ? # Universal Host Controller (Intel)
# USB bus support
usb* at uhci?
# USB Hubs
uhub* at usb?
uhub* at uhub? port ? configuration ? interface ?
# USB Mass Storage
umass* at uhub? port ? configuration ? interface ?
wd* at umass?
# SCSI controllers
ahc* at pci? dev ? function ? # Adaptec [23]94x, aic78x0 SCSI
# SCSI bus support (for both ahc and umass)
scsibus* at scsi?
# SCSI devices
sd* at scsibus? target ? lun ? # SCSI disk drives
cd* at scsibus? target ? lun ? # SCSI CD-ROM drives
NetBSD as a domU in a VPS
=========================
The bulk of the HOWTO is about using NetBSD as a dom0 on your own
hardware. This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0.
VPS operators provide varying degrees of access and mechanisms for
configuration. The big issue is usually how one controls which kernel
is booted, because the kernel is nominally in the dom0 filesystem (to
which VPS users do not normally have access).
A VPS user may want to compile a kernel for security updates, to run
npf or IPsec, or for any other reason one might want to change their
kernel.
One approach is to have an administrative interface to upload a kernel,
or to select from a prepopulated list. Other approaches are py-grub
(deprecated) and pvgrub, which are ways to have a bootloader obtain a
kernel from the domU filesystem. This is closer to a regular physical
computer, where someone who controls a machine can replace the kernel.
py-grub
-------
py-grub runs in the dom0 and looks into the domU filesystem. This
implies that the domU must have a kernel in a filesystem in a format
known to py-grub. As of 2014, py-grub seems to be of mostly historical interest.
pvgrub
------
pvgrub is a version of grub that uses PV operations instead of BIOS
calls. It is booted from the dom0 as the domU kernel, and then reads
/grub/menu.lst and loads a kernel from the domU filesystem.
[prgmr.com](http://prgmr.com/) uses this approach to let users choose
their own operating system and kernel. See the [prgmr.com NetBSD
HOWTO](http://wiki.prgmr.com/mediawiki/index.php/NetBSD_as_a_DomU).
Because [grub's FFS code](http://xenbits.xensource.com/hg/xen-unstable.hg/file/bca284f67702/tools/libfsimage/ufs/fsys_ufs.c)
appears not to support all aspects of modern FFS,
typically one has an ext2 or FAT partition for the kernel, so that
grub can understand it, which leads to /netbsd not being the actual
kernel. One must remember to update the special boot partition.
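A hedged sketch of a /grub/menu.lst on that small boot partition
(grub-legacy syntax; the partition and kernel name are examples to be
adapted):

    default 0
    timeout 5

    title NetBSD
        root (hd0,0)
        kernel /netbsd-XEN3_DOMU.gz
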
Amazon
------
TODO: add link to NetBSD amazon howto.
Using npf
---------
In standard kernels, npf is a module, and thus cannot be loaded in a
DOMU kernel.
TODO: explain how to compile npf into a custom kernel, answering (but
note that the problem was caused by not booting the right kernel):
http://mail-index.netbsd.org/netbsd-users/2014/12/26/msg015576.html
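A minimal sketch of such a custom kernel config (assuming an amd64
domU; compiling npf in is done with the standard pseudo-device line):

    include "arch/amd64/conf/XEN3_DOMU"

    # build npf into the kernel rather than as a module
    pseudo-device   npf
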