[[!meta title="Xen HowTo"]]

Xen is a Type 1 hypervisor which supports running multiple guest operating
systems on a single physical machine. One uses the Xen kernel to control the
CPU, memory and console, a dom0 operating system which mediates access to
other hardware (e.g., disks, network, USB), and one or more domU operating
systems which operate in an unprivileged virtualized environment. IO requests
from the domU systems are forwarded by the Xen hypervisor to the dom0 to be
fulfilled.

This HOWTO presumes a basic familiarity with the Xen system
architecture, with installing NetBSD on amd64 hardware, and with
installing software from pkgsrc. See also the [Xen
website](http://www.xenproject.org/).

[[!toc]]

# Overview

The basic concept of Xen is that the hypervisor (xenkernel) runs on
the hardware, and runs a privileged domain ("dom0") that can access
disks/networking/etc. One then runs additional unprivileged domains
(each a "domU"), presumably to do something useful.

This HOWTO addresses how to run a NetBSD dom0 (and hence also build
xen itself). It also addresses how to run domUs in that environment,
and how to deal with having a domU in a Xen environment run by someone
else and/or not running NetBSD.

There are many choices one can make; the HOWTO recommends the standard
approach and limits discussion of alternatives in many cases.

## Guest Styles

Xen supports different styles of guests.

[[!table data="""
Style of guest |Supported by NetBSD
PV |Yes (dom0, domU)
HVM |Yes (domU)
PVHVM |current-only (domU)
PVH |current-only (domU, dom0 not yet)
"""]]

In Para-Virtualized (PV) mode, the guest OS does not attempt to access
hardware directly, but instead makes hypercalls to the hypervisor; PV
guests must be specifically coded for Xen.
See [PV](https://wiki.xen.org/wiki/Paravirtualization_(PV\)).

In HVM mode, no guest modification is required; however, hardware
support is required, such as VT-x on Intel CPUs and SVM on AMD CPUs.
The dom0 runs qemu to emulate hardware.
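
Whether a given machine can run HVM guests can be checked from a
booted NetBSD system before setting up Xen; as a minimal sketch (the
exact strings in the cpuctl output vary by CPU vendor and model):

[[!template id=programlisting text="""
# cpuctl identify 0 | grep -i -e vmx -e svm
"""]]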

In PVHVM mode, the guest runs as HVM, but additionally can use PV
drivers for efficiency.
See [PV on HVM](https://wiki.xen.org/wiki/PV_on_HVM).

There have been two PVH modes: original PVH and PVHv2. Original PVH
was based on PV mode and is no longer relevant at all. PVHv2 is
basically lightweight HVM with PV drivers. A critical feature of it
is that qemu is not needed; the hypervisor can do the emulation that
is required. Thus, a dom0 can be PVHv2.
The source code uses PVH and config files use pvh; this refers to PVHv2.
See [PVH(v2)](https://wiki.xenproject.org/wiki/PVH_(v2\)_Domu).

At system boot, the dom0 kernel is loaded as a module with Xen as the kernel.
The dom0 can start one or more domUs. (Booting is explained in detail
in the dom0 section.)

## CPU Architecture

Xen runs on x86_64 hardware (the NetBSD amd64 port).

There is a concept of Xen running on ARM, but there are no reports of this working with NetBSD.

The dom0 system should be amd64. (Instructions for i386PAE dom0 have been removed from the HOWTO.)

The domU can be i386PAE or amd64.
i386PAE at one point was considered as [faster](https://lists.xen.org/archives/html/xen-devel/2012-07/msg00085.html) than amd64.

## Xen Versions

In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
xenkernel and xentools. We will refer only to the kernel versions,
but note that both packages must be installed together and must have
matching versions.

Versions available in pkgsrc:

[[!table data="""
Xen Version |Package Name |Xen CPU Support |EOL'ed By Upstream
4.11 |xenkernel411 |x86_64 |No
4.13 |xenkernel413 |x86_64 |No
"""]]

See also the [Xen Security Advisory page](http://xenbits.xen.org/xsa/).

Older Xen had a python-based management tool called xm, now replaced
by xl.

## NetBSD versions

Xen has been supported in NetBSD for a long time, at least since 2005.
Initially Xen was PV only.

NetBSD 8 and up support PV and HVM modes.

Support for PVHVM and PVH is available only in NetBSD-current.

NetBSD up to and including NetBSD 9 as a dom0 does not run SMP,
because some drivers are not yet safe for this. NetBSD-current
supports SMP in dom0.

NetBSD, when run as a domU, can and does typically run SMP.

Note that while Xen 4.13 is current, the kernel support is still
called XEN3, because the hypercall interface has not changed
significantly.

# Creating a NetBSD dom0

In order to install NetBSD as a dom0, one first installs a normal
NetBSD system, and then pivots the install to a dom0 install by
changing the kernel and boot configuration.

In 2018-05, trouble booting a dom0 was reported with 256M of RAM: with
512M it worked reliably. This does not make sense, but if you see
"not ELF" after Xen boots, try increasing dom0 RAM.

## Installation of NetBSD

[Install NetBSD/amd64](/guide/inst/) just as you would if you were not
using Xen. Therefore, use the most recent release, or a build from
the most recent stable branch. Alternatively, use -current, being
mindful of all the usual caveats of lower stability of current, and
likely a bit more so.

## Installation of Xen

### Building Xen

Use the most recent version of Xen in pkgsrc, unless the DESCR says that it is not suitable.
Therefore, choose 4.13.
In the dom0, install xenkernel413 and xentools413 from pkgsrc.
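
As a sketch, building and installing both packages from a pkgsrc
checkout (binary packages via pkg_add or pkgin work just as well)
looks like this, assuming the usual /usr/pkgsrc location:

[[!template id=programlisting text="""
# cd /usr/pkgsrc/sysutils/xenkernel413 && make install clean
# cd /usr/pkgsrc/sysutils/xentools413 && make install clean
"""]]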

Once this is done, copy the Xen kernel from where pkgsrc puts it to
where the boot process will be able to find it:

[[!template id=programlisting text="""
# cp -p /usr/pkg/xen413-kernel/xen.gz /
"""]]

Then, place a NetBSD XEN3_DOM0 kernel in the `/` directory. Such a
kernel can either be taken from a local release build.sh run, compiled
manually, or downloaded from the NetBSD FTP, for example at:

[[!template id=programlisting text="""
ftp.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
"""]]

### Configuring booting

Read boot.cfg(8) carefully. Add lines to /boot.cfg to boot Xen:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
menu=Xen single user:load /netbsd-XEN3_DOM0.gz console=pc -s;multiboot /xen.gz dom0_mem=512M
"""]]

This specifies that the dom0 should have 512MB of RAM, leaving the rest
to be allocated for domUs.

NB: This says add, not replace, so that you will be able to more
easily boot a NetBSD kernel without Xen. Once Xen boots ok, you may
want to set it as default. It is highly likely that you will have
trouble at some point, and keeping an up-to-date GENERIC for use in
fixing problems is the standard prudent approach.

### Console selection

See boot_console(8). Understand that you should start from a place of
having console setup correct for booting GENERIC before trying to
configure Xen.

Generally for GENERIC, one sets the console in bootxx_ffsv1 or
equivalent, and this is passed on to /boot (where one typically does
not set the console). This configuration of bootxx_ffsv1 should also
be in place for Xen systems, to allow seeing messages from /boot and
use of a keyboard to select a line from the menu. And, one should
have a working boot path to GENERIC for rescue situations.

With GENERIC, the boot options are passed on to /netbsd, but there is
currently no mechanism to pass these via multiboot to the hypervisor.
Thus, in addition to configuring the console in the boot blocks, one
must also configure it for Xen.

By default, the hypervisor (Xen itself) will use some sort of vga
device as the console, much like GENERIC uses by default. The vga
console is relinquished at the conclusion of hypervisor boot, before
the dom0 is started.

\todo Explain if there is any notion of input to the Xen console;
there is something about 3 CTRL-As in a row. Perhaps this is about
the serial console which is not relinquished?

The hypervisor can be configured to use a serial port console, e.g.

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=com0;multiboot /xen.gz dom0_mem=512M console=com1 com1=9600,8n1
"""]]

This example uses the first serial port (Xen counts from 1; this is
what NetBSD would call com0), and sets speed and parity. (The dom0 is
then configured to use the same serial port in this example.)

One also configures the console for the dom0. While one might expect
console=pc to be default, following behavior of GENERIC, a hasty read
of the code suggests there is no default and booting without a
selected console might lead to a panic. Also, there is merit in
explicit configuration. Therefore the standard approach is to place
console=pc as part of the load statement for the dom0 kernel, or
alternatively console=com0.

The NetBSD dom0 kernel will attach xencons(4) (the man page does not
exist), but this is not used as a console. It is used to obtain the
messages from the hypervisor's console; run `xl dmesg` to see them.

### Tuning

In an attempt to add performance, one can also add `dom0_max_vcpus=1 dom0_vcpus_pin`,
to force only one vcpu to be provided (since NetBSD dom0 can't use
more) and to pin that vcpu to a physical CPU. Xen has
[many boot options](http://xenbits.xenproject.org/docs/4.13-testing/misc/xen-command-line.html),
and other than dom0 memory and max_vcpus, they are generally not
necessary.
\todo Revisit this advice with current.
\todo Explain if anyone has ever actually measured that this helps.
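
If you do want to try these options, they are simply appended to the
hypervisor part of the boot line shown earlier, for example (values
here are only illustrative):

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M dom0_max_vcpus=1 dom0_vcpus_pin
"""]]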

### rc.conf

Ensure that the boot scripts installed in
`/usr/pkg/share/examples/rc.d` are in `/etc/rc.d`, either because you
have `PKG_RCD_SCRIPTS=yes`, or manually. (This is not special to Xen,
but a normal part of pkgsrc usage.)
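
If the scripts are not already there, copying them by hand is
sufficient; a sketch (the exact set of scripts shipped depends on the
xentools version):

[[!template id=programlisting text="""
# cp /usr/pkg/share/examples/rc.d/xencommons /etc/rc.d/
# cp /usr/pkg/share/examples/rc.d/xendomains /etc/rc.d/
"""]]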

Set `xencommons=YES` in rc.conf:

[[!template id=filecontent name="/etc/rc.conf" text="""
xencommons=YES
"""]]

\todo Recommend for/against xen-watchdog.

### Testing

Now, reboot so that you are running a DOM0 kernel under Xen, rather
than GENERIC without Xen.

Once the reboot is done, use `xl` to inspect Xen's boot messages,
available resources, and running domains. For example:

[[!template id=programlisting text="""
# xl dmesg
... xen's boot info ...
# xl info
... available memory, etc ...
# xl list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0          0       64     0  r----     58.1
"""]]

Xen logs will be in /var/log/xen.

### Issues with xencommons

`xencommons` starts `xenstored`, which stores data on behalf of dom0 and
domUs. It does not currently work to stop and start xenstored.
Certainly all domUs should be shut down first, following the sort order
of the rc.d scripts. However, the dom0 sets up state with xenstored,
and is not notified when xenstored exits, leading to not recreating
the state when the new xenstored starts. Until there's a mechanism to
make this work, one should not expect to be able to restart xenstored
(and thus xencommons). There is currently no reason to expect that
this will get fixed any time soon.
\todo Confirm if this is still true in 2020.

## Xen-specific NetBSD issues

There are (at least) two additional things different about NetBSD as a
dom0 kernel compared to hardware.

One is that through NetBSD 9 the module ABI is different because some
of the #defines change, so there are separate sets of modules in
/stand. In NetBSD-current, there is only one set of modules.

The other difference is that XEN3_DOM0 does not have exactly the same
options as GENERIC. While it is debatable whether or not this is a
bug, users should be aware of this and can simply add missing config
items if desired.

Finally, there have been occasional reports of trouble with X11
servers in NetBSD as a dom0.

## Updating Xen in a dom0

Basically, update the xenkernel and xentools packages and copy the new
Xen kernel into place, and reboot. This procedure should be usable to
update to a new Xen release, but the reader is reminded that having a
non-Xen boot method was recommended earlier.
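
As a sketch, assuming binary packages are available and the same 4.13
branch is being updated in place:

[[!template id=programlisting text="""
# pkg_add -u xenkernel413 xentools413
# cp -p /usr/pkg/xen413-kernel/xen.gz /
# shutdown -r now
"""]]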

## Updating NetBSD in a dom0

This is just like updating NetBSD on bare hardware, assuming the new
version supports the version of Xen you are running. Generally, one
replaces the kernel and reboots, and then overlays userland binaries
and adjusts `/etc`.

Note that one should update both the non-Xen kernel typically used for
rescue purposes, as well as the DOM0 kernel used with Xen.
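
A minimal sketch of the kernel part of such an update, assuming
freshly built kernels in the current directory and /boot.cfg entries
that point at these file names (adjust to your own layout):

[[!template id=programlisting text="""
# cp netbsd-XEN3_DOM0.gz /netbsd-XEN3_DOM0.gz
# cp netbsd-GENERIC.gz /netbsd-GENERIC.gz
"""]]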

## anita (for testing NetBSD)

With a NetBSD dom0, even without any domUs, one should be able to run
anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as
root, because anita must create a domU):

[[!template id=programlisting text="""
anita --vmm=xl test file:///usr/obj/i386/
"""]]

# Unprivileged domains (domU)

This section describes general concepts about domUs. It does not
address specific domU operating systems or how to install them. The
config files for domUs are typically in `/usr/pkg/etc/xen`, and are
typically named so that the file name, domU name and the domU's host
name match.

The domU is provided with CPU and memory by Xen, configured by the
dom0. The domU is provided with disk and network by the dom0,
mediated by Xen, and configured in the dom0.

Entropy in domUs can be an issue; physical disks and network are on
the dom0. NetBSD's /dev/random system works, but is often challenged.

## Config files

See /usr/pkg/share/examples/xen/xlexample*
for a small number of well-commented examples, mostly for running
GNU/Linux.

The following is an example minimal domain configuration file. The domU
serves as a network file server.

[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
name = "domU-id"
kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
         'file:/n0/xen/foo-wd1,0x1,w' ]
"""]]

The domain will have the name given in the `name` setting. The kernel has the
host/domU name in it, so that on the dom0 one can update the various
domUs independently. The `vif` line causes an interface to be provided,
with a specific mac address (do not reuse MAC addresses!), in bridge
mode. Two disks are provided, and they are both writable; the bits
are stored in files and Xen attaches them to a vnd(4) device in the
dom0 on domain creation. The system treats xbd0 as the boot device
without needing explicit configuration.

There is no type line; that implicitly defines a PV domU.

By convention, domain config files are kept in `/usr/pkg/etc/xen`. Note
that "xl create" takes the name of a config file, while other commands
take the name of a domain.

Examples of commands:

[[!template id=programlisting text="""
xl create /usr/pkg/etc/xen/foo
xl console domU-id
xl create -c /usr/pkg/etc/xen/foo
xl shutdown domU-id
xl list
"""]]

Typing `^]` will exit the console session. Shutting down a domain is
equivalent to pushing the power button; a NetBSD domU will receive a
power-press event and do a clean shutdown. Shutting down the dom0
will trigger controlled shutdowns of all configured domUs.

## CPU and memory

A domain is provided with some number of vcpus, up to the number
of CPUs seen by the hypervisor. For a domU, it is controlled
from the config file by the "vcpus = N" directive.

A domain is provided with memory; this is controlled in the config
file by "memory = N" (in megabytes). In the straightforward case, the
sum of the memory allocated to the dom0 and all domUs must be less
than the available memory.
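
For example, the following config file fragment (using the directives
just described) gives a domU two vcpus and 1024 MB of memory:

[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
vcpus = 2
memory = 1024
"""]]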

Xen also provides a "balloon" driver, which can be used to let domains
use more memory temporarily.
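
The memory of a running domain can be adjusted from the dom0 with xl;
a small sketch (the balloon driver cannot grow a domain beyond its
configured maximum):

[[!template id=programlisting text="""
# xl mem-set domU-id 512
"""]]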

## Virtual disks

In domU config files, the disks are defined as a sequence of 3-tuples:

* The first element is "method:/path/to/disk". Common methods are
"file:" for a file-backed vnd, and "phy:" for something that is already
a device, such as an LVM logical volume.

* The second element is an artifact of how virtual disks are passed to
Linux, and a source of confusion with NetBSD Xen usage. Linux domUs
are given a device name to associate with the disk, and values like
"hda1" or "sda1" are common. In a NetBSD domU, the first disk appears
as xbd0, the second as xbd1, and so on. However, xl demands a
second argument. The name given is converted to a major/minor by
calling stat(2) on the name in /dev and this is passed to the domU.
In the general case, the dom0 and domU can be different operating
systems, and it is an unwarranted assumption that they have consistent
numbering in /dev, or even that the dom0 OS has a /dev. With NetBSD
as both dom0 and domU, using values of 0x0 for the first disk and 0x1
for the second works fine and avoids this issue. For a GNU/Linux
guest, one can create /dev/hda1 in /dev, or pass 0x301 for
/dev/hda1.

* The third element is "w" for writable disks, and "r" for read-only
disks.

Example:
[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
disk = [ 'file:/n0/xen/foo-wd0,0x0,w' ]
"""]]

Note that NetBSD by default creates only vnd[0123]. If you need more
than 4 total virtual disks at a time, run e.g. "./MAKEDEV vnd4" in the
dom0.
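
Preparing the backing storage is done in the dom0 before the domain is
created. A rough sketch for the two common methods, assuming a
/n0/xen directory for file-backed disks and an existing LVM volume
group named vg0 (both names are illustrative):

[[!template id=programlisting text="""
# dd if=/dev/zero of=/n0/xen/foo-wd0 bs=1m count=4096    # 4 GB file for 'file:'
# lvm lvcreate -L 4G -n foo-wd0 vg0                      # logical volume for 'phy:'
"""]]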

## Virtual Networking

Xen provides virtual Ethernets, each of which connects the dom0 and a
domU. For each virtual network, there is an interface "xvifN.M" in
the dom0, and a matching interface xennetM (NetBSD name) in domU index N.
The interfaces behave as if there is an Ethernet with two
adapters connected. From this primitive, one can construct various
configurations. We focus on two common and useful cases for which
there are existing scripts: bridging and NAT.

With bridging (in the example above), the domU perceives itself to be
on the same network as the dom0. For server virtualization, this is
usually best. Bridging is accomplished by creating a bridge(4) device
and adding the dom0's physical interface and the various xvifN.0
interfaces to the bridge. One specifies "bridge=bridge0" in the domU
config file. The bridge must be set up already in the dom0; an
example /etc/ifconfig.bridge0 is:

[[!template id=filecontent name="/etc/ifconfig.bridge0" text="""
create
up
!brconfig bridge0 add wm0
"""]]

With NAT, the domU perceives itself to be behind a NAT running on the
dom0. This is often appropriate when running Xen on a workstation.
TODO: NAT appears to be configured by "vif = [ '' ]".

The MAC address specified is the one used for the interface in the new
domain. The interface in dom0 will use this address XOR'd with
00:00:00:01:00:00. Random MAC addresses are assigned if not given.

## Starting domains automatically

To start domains `domU-netbsd` and `domU-linux` at boot and shut them
down cleanly on dom0 shutdown, add the following in rc.conf:

[[!template id=filecontent name="/etc/rc.conf" text="""
xendomains="domU-netbsd domU-linux"
"""]]

# domU setup for specific systems

Creating domUs is almost entirely independent of operating system. We
have already presented the basics of config files in the previous section.

Of course, this section presumes that you have a working dom0.

## Creating a NetBSD PV domU

See the earlier config file, and adjust memory. Decide on how much
storage you will provide, and prepare it (file or LVM).

While the kernel will be obtained from the dom0 file system, the same
file should be present in the domU as /netbsd so that tools like
savecore(8) can work. (This is helpful but not necessary.)

The kernel must be specifically built for Xen, to use PV interfaces as
a domU. NetBSD release builds provide the following kernels:

        i386 XEN3PAE_DOMU
        amd64 XEN3_DOMU

This will boot NetBSD, but this is not that useful if the disk is
empty. One approach is to unpack sets onto the disk outside of Xen
(by mounting it, just as you would prepare a physical disk for a
system you can't run the installer on).
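
A rough sketch of that approach with a file-backed disk, done in the
dom0 (the device names, partition layout, and list of sets are all
illustrative; see vnconfig(8) for details):

[[!template id=programlisting text="""
# vnconfig vnd0 /n0/xen/foo-wd0
# disklabel -i -I vnd0              # create an 'a' partition covering the disk
# newfs /dev/rvnd0a
# mount /dev/vnd0a /mnt
# tar -xzpf /path/to/sets/base.tgz -C /mnt
# tar -xzpf /path/to/sets/etc.tgz -C /mnt
# umount /mnt
# vnconfig -u vnd0
"""]]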

A second approach is to run an INSTALL kernel, which has a miniroot
and can load sets from the network. To do this, copy the INSTALL
kernel to / and change the kernel line in the config file to:

    kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"

Then, start the domain as "xl create -c configfile".
root = "xbd0" |
|
# extra parameters passed to the kernel |
|
# this is where you can set boot flags like -s, -a, etc ... |
|
#extra = "" |
|
|
|
#---------------------------------------------------------------------------- |
|
# Set according to whether you want the domain restarted when it exits. |
|
# The default is False. |
|
#autorestart = True |
|
|
|
# end of nbsd config file ==================================================== |
|
|
|
When a new domain is created, xen calls the |
|
`/usr/pkg/etc/xen/vif-bridge` script for each virtual network interface |
|
created in *domain0*. This can be used to automatically configure the |
|
xvif?.? interfaces in *domain0*. In our example, these will be bridged |
|
with the bridge0 device in *domain0*, but the bridge has to exist first. |
|
To do this, create the file `/etc/ifconfig.bridge0` and make it look |
|
like this: |
|
|
|
create |
|
!brconfig $int add ex0 up |
|
|
|
(replace `ex0` with the name of your physical interface). Then bridge0 |
|
will be created on boot. See the bridge(4) man page for details. |
|
|
|
So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for xvif?.? (a |
|
working vif-bridge is also provided with xentools20) configuring: |
|
|
|
#!/bin/sh |
|
#============================================================================ |
|
# $NetBSD: howto.mdwn,v 1.36 2014/12/24 16:02:49 gdt Exp $ |
|
# |
|
# /usr/pkg/etc/xen/vif-bridge |
|
# |
|
# Script for configuring a vif in bridged mode with a dom0 interface. |
|
# The xend(8) daemon calls a vif script when bringing a vif up or down. |
|
# The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp |
|
# in the ``vif-script'' field. |
|
# |
|
# Usage: vif-bridge up|down [var=value ...] |
|
# |
|
# Actions: |
|
# up Adds the vif interface to the bridge. |
|
# down Removes the vif interface from the bridge. |
|
# |
|
# Variables: |
|
# domain name of the domain the interface is on (required). |
|
# vifq vif interface name (required). |
|
# mac vif MAC address (required). |
|
# bridge bridge to add the vif to (required). |
|
# |
|
# Example invocation: |
|
# |
|
# vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0 |
|
# |
|
#============================================================================ |
|
|
|
# Exit if anything goes wrong |
|
set -e |
|
|
|
echo "vif-bridge $*" |
|
|
|
# Operation name. |
|
OP=$1; shift |
|
|
|
# Pull variables in args into environment |
|
for arg ; do export "${arg}" ; done |
|
|
|
# Required parameters. Fail if not set. |
|
domain=${domain:?} |
|
vif=${vif:?} |
|
mac=${mac:?} |
|
bridge=${bridge:?} |
|
|
|
# Optional parameters. Set defaults. |
|
ip=${ip:-''} # default to null (do nothing) |
|
|
|
# Are we going up or down? |
|
case $OP in |
|
up) brcmd='add' ;; |
|
down) brcmd='delete' ;; |
|
*) |
|
echo 'Invalid command: ' $OP |
|
echo 'Valid commands are: up, down' |
|
exit 1 |
|
;; |
|
esac |
|
|
|
# Don't do anything if the bridge is "null". |
|
if [ "${bridge}" = "null" ] ; then |
|
exit |
|
fi |
|
|
|
# Don't do anything if the bridge doesn't exist. |
|
if ! ifconfig -l | grep "${bridge}" >/dev/null; then |
|
exit |
|
fi |
|
|
|
# Add/remove vif to/from bridge. |
|
ifconfig x${vif} $OP |
|
brconfig ${bridge} ${brcmd} x${vif} |
|
|
|
Now, running |
|
|
|
xm create -c /usr/pkg/etc/xen/nbsd |
|
|
|
should create a domain and load a NetBSD kernel in it. (Note: `-c` |
|
causes xm to connect to the domain's console once created.) The kernel |
|
will try to find its root file system on xbd0 (i.e., wd0e) which hasn't |
|
been created yet. wd0e will be seen as a disk device in the new domain, |
|
so it will be 'sub-partitioned'. We could attach a ccd to wd0e in |
|
*domain0* and partition it, newfs and extract the NetBSD/i386 or amd64 |
|
tarballs there, but there's an easier way: load the |
|
`netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets. |
|
Like other install kernels, it contains a ramdisk with sysinst, so you |
|
can install NetBSD using sysinst on your new domain. |
|
|
|
If you want to install NetBSD/Xen with a CDROM image, the following line |

Alternatively, if you want to install NetBSD/Xen with a CDROM image, the following
line should be used in the config file.

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be
selected. The CDROM device should be changed to `xbd1d`.

Once done installing, "halt -p" the new domain (don't reboot or halt:
it would reload the INSTALL_XEN3_DOMU kernel even if you changed the
config file), switch the config file back to the XEN3_DOMU kernel,
and start the new domain again. Now it should be able to use "root on
xbd0a" and you should have a functional NetBSD domU.

TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
`/etc/ttys` except the console.

Finally, all screens must be commented out from `/etc/wscons.conf`.

One should also run `powerd` in a domU, but this should not need
configuring. With powerd, the domain will run a controlled shutdown
if `xl shutdown -R` or `xl shutdown -H` is used on the dom0, via
receiving a synthetic `power button pressed` signal. In 9 and
current, `powerd` is run by default under Xen kernels (or if ACPI is
present), and it can be added to rc.conf if not.

It is not strictly necessary to have a kernel (as /netbsd) in the domU
file system. However, various programs (e.g. netstat) will use that
kernel to look up symbols to read from kernel virtual memory. If
/netbsd is not the running kernel, those lookups will fail. (This is
not really a Xen-specific issue, but because the domU kernel is
obtained from the dom0, it is far more likely to be out of sync or
missing with Xen.)

Note that NetBSD by default creates only xbd[0123]. If you need more
virtual disks in a domU, run e.g. "./MAKEDEV xbd4" in the domU.

## Creating a Linux domU

Creating unprivileged Linux domains isn't much different from
unprivileged NetBSD domains, but there are some details to know.

The second parameter of the disk declaration (the '0x1' in the
example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda). Linux builds device numbers as: (major \<\< 8 + minor).
So, hda1 which has major 3 and minor 1 on a Linux system will have
device number 0x301. Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers. To export a partition to a Linux guest we can use:

    disk = [ 'phy:/dev/wd0e,0x300,w' ]
    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as root
partition.

To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, make adjustments in
`/etc` (fstab, network config). It should also be possible to extract
binary packages such as .rpm or .deb directly to the mounted partition
using the appropriate tool, possibly running under NetBSD's Linux
emulation. Once the file system has been populated, umount it. If
desirable, the file system can be converted to ext3 using tune2fs -j.
It should now be possible to boot the Linux guest domain, using one of
the vmlinuz-\*-xenU kernels available in the Xen binary distribution.
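
A condensed sketch of those steps, assuming the exported partition is
/dev/wd0e as in the example above:

[[!template id=programlisting text="""
# mke2fs /dev/rwd0e
# mount_ext2fs /dev/wd0e /mnt
... copy files from a working Linux system into /mnt ...
# umount /mnt
# tune2fs -j /dev/rwd0e
"""]]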

To get the Linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration since not all Linux distributions auto-attach a
tty to the xen console.
Creating an unprivileged Solaris domain (domU) |
## Creating a NetBSD HVM domU |
---------------------------------------------- |
|
|
Use type='hmv', probably. Use a GENERIC kernel within the disk image. |
## Creating a NetBSD PVH domU

Use type='pvh'.

\todo Explain where the kernel comes from.
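
As a rough sketch (unverified, and the kernel path is only a
placeholder until the todo above is resolved), a PVH config might
look like:

    type = 'pvh'
    name = 'netbsd-pvh'
    kernel = '/home/xen/netbsd-pvh-kernel'   # placeholder; see the todo above
    memory = 1024
    vcpus = 2
    disk = [ 'phy:/dev/wd0g,xvda,w' ]
    vif = [ 'bridge=bridge0' ]
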
## Creating a Solaris domU

See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/); the procedure
below installs an OpenSolaris domU from a DVD image.

Download an OpenSolaris [release](http://opensolaris.org/os/downloads/)
or [development snapshot](http://genunix.org/) DVD image. Attach the DVD
image to a vnd(4) device. Copy the kernel and ramdisk file system
image to your dom0 file system.

    dom0# mkdir /root/solaris
    dom0# vnconfig vnd0 osol-1002-124-x86.iso
    dom0# mount /dev/vnd0a /mnt

    ## for a 64-bit guest
    dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris
    dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris

    ## for a 32-bit guest
    dom0# cp /mnt/boot/x86.microroot /root/solaris
    dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris

    dom0# umount /mnt

Keep the vnd(4) configured. For some reason the boot process stalls
unless the DVD image is attached to the guest as a "phy" device. Create
an initial configuration file with the following contents. Substitute
*/dev/wd0k* with an empty partition at least 8 GB large.

    memory = 640
    name = 'solaris'
    disk = [ 'phy:/dev/wd0k,0,w' ]
    disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]
    vif = [ 'bridge=bridge0' ]
    kernel = '/root/solaris/unix'
    ramdisk = '/root/solaris/x86.microroot'
    # for a 64-bit guest
    extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'
    # for a 32-bit guest
    #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'

Start the guest.

    dom0# xm create -c solaris.cfg
    Started domain solaris
    v3.3.2 chgset 'unavailable'
    SunOS Release 5.11 Version snv_124 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: opensolaris
    Remounting root read/write
    Probing for device nodes ...
    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
    Preparing live image for use
    Done mounting Live image

Make sure the network is configured. Note that it can take a minute for
the xnf0 interface to appear.

    opensolaris console login: jack
    Password: jack
    Sun Microsystems Inc. SunOS 5.11 snv_124 November 2008
    jack@opensolaris:~$ pfexec sh
    sh-3.2# ifconfig -a
    sh-3.2# exit

Set a password for VNC and start the VNC server which provides the X11
display where the installation program runs.

    jack@opensolaris:~$ vncpasswd
    Password: solaris
    Verify: solaris
    jack@opensolaris:~$ cp .Xclients .vnc/xstartup
    jack@opensolaris:~$ vncserver :1

From a remote machine connect to the VNC server. Use `ifconfig xnf0` on
the guest to find the correct IP address to use.

    remote$ vncviewer 172.18.2.99:1

It is also possible to launch the installation on a remote X11 display.

    jack@opensolaris:~$ export DISPLAY=172.18.1.1:0
    jack@opensolaris:~$ pfexec gui-install

After the GUI installation is complete you will be asked to reboot.
Before that you need to determine the ZFS ID for the new boot file
system and update the configuration file accordingly. Return to the
guest console.

    jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs
    bootfs = 43
    ^C
    jack@opensolaris:~$

The final configuration file should look like this. Note in particular
the last line.

    memory = 640
    name = 'solaris'
    disk = [ 'phy:/dev/wd0k,0,w' ]
    vif = [ 'bridge=bridge0' ]
    kernel = '/root/solaris/unix'
    ramdisk = '/root/solaris/x86.microroot'
    extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'

Restart the guest to verify it works correctly.

    dom0# xm destroy solaris
    dom0# xm create -c solaris.cfg
    Using config file "./solaris.cfg".
    v3.3.2 chgset 'unavailable'
    Started domain solaris
    SunOS Release 5.11 Version snv_124 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
    Hostname: osol
    Configuring devices.
    Loading smf(5) service descriptions: 160/160
    svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
    Reading ZFS config: done.
    Mounting ZFS filesystems: (6/6)
    Creating new rsa public/private host key pair
    Creating new dsa public/private host key pair

    osol console login:

## PCI passthrough: Using PCI devices in guest domains

NB: PCI passthrough only works on some Xen versions and as of 2020 it
is not clear that it works on any version in pkgsrc. Reports
confirming or denying this notion should be sent to port-xen@.

The dom0 can give other domains access to selected PCI
devices. This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller. However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices. Also, it's not possible to export ISA devices to non-dom0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.

This functionality is only available in NetBSD-5.1 (and later) dom0
and domU. If the dom0 is NetBSD, it has to be running Xen 3.1, as
support has not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to
the "pciback" driver in the dom0. Devices passed to the dom0 via the
pciback.hide boot parameter will attach to "pciback" instead of the
usual driver. The list of devices is specified as "(bus:dev.func)",
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)
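
Where exactly this parameter is given depends on how the dom0 is
booted; with a typical NetBSD /boot.cfg entry for Xen it would
presumably be appended to the dom0 kernel's load line, roughly like
this (a sketch, not a tested configuration):

    menu=Xen:load /netbsd-XEN3_DOM0 console=pc pciback.hide=(00:0a.0)(00:06.0);multiboot /xen.gz dom0_mem=512M
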
pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.

PCI devices to be exported to a domU are listed in the "pci" array of
the domU's config file, with the format "0000:bus:dev.func".

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an "xpci" device will show up, to which one or more PCI
buses will attach. Then the PCI drivers will attach to PCI buses as
usual. Note that the default NetBSD domU kernels do not have "xpci"
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU. Here's a kernel config example;
note that only the "xpci" lines are unusual.

    include "arch/i386/conf/XEN3_DOMU"

    # Add support for PCI buses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # PCI USB controllers
    uhci* at pci? dev ? function ?  # Universal Host Controller (Intel)

    # USB bus support
    usb* at uhci?

    # USB Hubs
    uhub* at usb?
    uhub* at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass* at uhub? port ? configuration ? interface ?
    wd* at umass?

    # SCSI controllers
    ahc* at pci? dev ? function ?   # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd* at scsibus? target ? lun ?  # SCSI disk drives
    cd* at scsibus? target ? lun ?  # SCSI CD-ROM drives
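
Assuming the usual NetBSD source layout under /usr/src and that the
config above is saved as sys/arch/i386/conf/MYDOMU (an illustrative
name), the kernel could then be built with build.sh, for example:

    $ cd /usr/src
    $ ./build.sh -m i386 tools
    $ ./build.sh -m i386 kernel=MYDOMU

build.sh prints the directory where the resulting kernel is left.
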
# Miscellaneous Information

## Nesting under Linux KVM

It is possible to run Xen and a NetBSD dom0 under Linux KVM. One
can enable virtio in the dom0 for greater speed.
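
For example, a rough sketch of such a setup (untested here;
netbsd-dom0.img is a placeholder for a disk image that already has Xen
and a NetBSD dom0 installed, and the virtio devices assume the dom0
kernel has the virtio drivers):

    $ qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
        -drive file=netbsd-dom0.img,if=virtio,format=raw \
        -netdev user,id=n0 -device virtio-net-pci,netdev=n0 \
        -serial mon:stdio -display none
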
## Other nesting

In theory, any full emulation should be able to run Xen and a NetBSD
dom0. The HOWTO does not currently have information about Xen XVM
mode, nvmm, qemu, VirtualBox, etc.

## NetBSD 5 as domU

[NetBSD 5 is known to panic.](http://mail-index.netbsd.org/port-xen/2018/04/17/msg009181.html)
(However, NetBSD 5 systems should be updated to a supported version.)
# NetBSD as a domU in a VPS

The bulk of the HOWTO is about using NetBSD as a dom0 on your own
hardware. This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0. This is not intended to be an exhaustive list of VPS providers;
only a few are mentioned that specifically support NetBSD.

VPS operators provide varying degrees of access and mechanisms for
configuration. The big issue is usually how one controls which kernel
is booted, because the kernel is nominally in the dom0 file system (to
which VPS users do not normally have access). A second issue is how
to install NetBSD.

A VPS user may want to compile a kernel for security updates, to run
npf or IPsec, or for any other reason one might want to change the
kernel.

One approach is to have an administrative interface to upload a kernel,
or to select from a prepopulated list. Other approaches are pygrub
(deprecated) and pvgrub, which are ways to have a bootloader obtain a
kernel from the domU file system. This is closer to a regular physical
computer, where someone who controls a machine can replace the kernel.

Another issue is multiple CPUs. With NetBSD 6, domUs support
multiple vcpus, and it is typical for VPS providers to enable multiple
CPUs for NetBSD domUs.
## Complexities due to Xen changes

Xen has many security advisories and people running Xen systems make
different choices.

### stub domains

Some (Linux only?) dom0 systems use something called "stub domains" to
isolate qemu from the dom0 system, as a security and reliability
mechanism when running HVM domUs. Somehow, NetBSD's GENERIC kernel
ends up using PIO for disks rather than DMA. Of course, all of this
is emulated, but emulated PIO is unusably slow. This problem is not
currently understood.

### Grant tables

There are multiple versions of the grant table interface, and some
security advisories have suggested disabling some of them. Some
versions of NetBSD apparently only use specific grant table versions,
and this can lead to "NetBSD current doesn't run on hosting provider
X" situations.

\todo Explain better.

## Boot methods

### pvgrub

pvgrub is a version of grub that uses PV operations instead of BIOS
calls. It is booted from the dom0 as the domU kernel, and then reads
/grub/menu.lst and loads a kernel from the domU file system.
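
A minimal /grub/menu.lst for this purpose might look like the
following sketch (the partition numbering and the kernel path depend
on how the provider sets up the disk):

    default 0
    timeout 5

    title NetBSD
        root (hd0,0)
        kernel /netbsd
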
[Panix](http://www.panix.com/) lets users use pvgrub. Panix reports
that pvgrub works with FFSv2 with 16K/2K and 32K/4K block/frag sizes
(and hence with defaults from "newfs -O 2"). See [Panix's pvgrub
page](http://www.panix.com/v-colo/grub.html), which describes only
Linux but should be updated to cover NetBSD :-).

[prgmr.com](http://prgmr.com/) also lets users use pvgrub to boot
their own kernel. See the [prgmr.com NetBSD
HOWTO](http://wiki.prgmr.com/mediawiki/index.php/NetBSD_as_a_DomU)
(which is in need of updating).

It appears that [grub's FFS
code](http://xenbits.xensource.com/hg/xen-unstable.hg/file/bca284f67702/tools/libfsimage/ufs/fsys_ufs.c)
does not support all aspects of modern FFS, but there are also reports
that FFSv2 works fine. At prgmr, typically one has an ext2 or FAT
partition for the kernel with the intent that grub can understand it,
which leads to /netbsd not being the actual kernel. One must remember
to update the special boot partition.
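
For example, after installing a new kernel as /netbsd in the domU, the
copy step might look like this (a sketch; /dev/xbd0e and the /grub
mount point are placeholders for the provider-specific boot
partition):

    domU# mount -t ext2fs /dev/xbd0e /grub
    domU# cp /netbsd /grub/netbsd      # the kernel grub actually boots
    domU# umount /grub
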
### pygrub

pygrub runs in the dom0 and looks into the domU file system. This
implies that the domU must have a kernel in a file system in a format
known to pygrub.

pygrub doesn't seem to work to load Linux images under a NetBSD dom0,
and is inherently less secure than pvgrub because it runs inside the
dom0. For both these reasons, pygrub should not be used, and is only
still present so that historical domU images using it still work.

As of 2014, pygrub seems to be of mostly historical interest. New
domUs should use pvgrub.

## Specific Providers

### Amazon

See the [Amazon EC2 page](/amazon_ec2/).