[[!meta title="Xen HowTo"]]

Xen is a Type 1 hypervisor which supports running multiple guest operating
systems on a single physical machine. One uses the Xen kernel to control the
CPU, memory and console, a dom0 operating system which mediates access to
other hardware (e.g., disks, network, USB), and one or more domU operating
systems which operate in an unprivileged virtualized environment. IO requests
from the domU systems are forwarded by the Xen hypervisor to the dom0 to be
fulfilled.

Xen supports different styles of guests; see [PV on HVM](https://wiki.xen.org/wiki/PV_on_HVM) and [PVH(v2)](https://wiki.xenproject.org/wiki/PVH_(v2\)_Domu) for upstream documentation.

[[!table data="""
Style of guest |Supported by NetBSD
PV             |Yes (dom0, domU)
HVM            |Yes (domU)
PVHVM          |current-only (domU)
PVH            |current-only (domU, dom0 not yet)
"""]]

In Para-Virtualized (PV) mode, the guest OS does not attempt to access
hardware directly, but instead makes hypercalls to the hypervisor; PV
guests must be specifically coded for Xen.

In HVM mode, no guest modification is required; however, hardware
support is required, such as VT-x on Intel CPUs and SVM on AMD CPUs.
The dom0 runs qemu to emulate hardware.
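
As a quick check for these CPU extensions, one can inspect the CPU
feature flags from NetBSD with "cpuctl identify 0" (a minimal sketch;
look for VMX on Intel or SVM on AMD, the exact flag names printed vary
by CPU):

[[!template id=programlisting text="""
# cpuctl identify 0 | grep -i -E 'VMX|SVM'
"""]]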

In PVHVM mode, the guest runs as HVM, but additionally can use PV
drivers for efficiency.

There have been two PVH modes: original PVH and PVHv2. Original PVH
was based on PV mode and is no longer relevant at all. PVHv2 is
basically lightweight HVM with PV drivers. A critical feature of it
is that qemu is not needed; the hypervisor can do the emulation that
is required. Thus, a dom0 can be PVHv2.

The source code uses PVH and config files use pvh; this refers to PVHv2.

At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
The dom0 can start one or more domUs. (Booting is explained in detail
in the dom0 section.)

Installing NetBSD/Xen is not extremely difficult, but it is more
complex than a normal installation of NetBSD. In general, this HOWTO
is occasionally overly restrictive about how things must be done,
guiding the reader to stay on the established path when there are no
known good reasons to stray.

NetBSD supports Xen in that it can serve as dom0, be used as a domU,
and that Xen kernels and tools are available in pkgsrc. This HOWTO
attempts to address both the case of running a NetBSD dom0 on hardware
and running domUs under it (NetBSD and other), and also running NetBSD
as a domU in a VPS.

This HOWTO presumes a basic familiarity with the Xen system
architecture, with installing NetBSD on i386/amd64 hardware, and with
installing software from pkgsrc. See also the [Xen
website](http://www.xenproject.org/).

History
-------

NetBSD used to support Xen2; this has been removed.

Before NetBSD's native bootloader could support Xen, the use of
grub was recommended. If necessary, see the
[old grub information](/xen/howto-grub/).

[[!toc]]

# Versions and Support

In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
xenkernel and xentools. We will refer only to the kernel versions,
but note that both packages must be installed together and must have
matching versions.

Versions available in pkgsrc:

[[!table data="""
Xen Version |Package Name |Xen CPU Support |xm? |EOL'ed By Upstream
4.11 |xenkernel411 |x86_64 | |No
4.13 |xenkernel413 |x86_64 | |No
"""]]

See also the [Xen Security Advisory page](http://xenbits.xen.org/xsa/).

Multiprocessor (SMP) support in NetBSD differs depending on the domain:

[[!table data="""
Domain |Supports SMP
dom0 |No
domU |Yes
"""]]

Note: NetBSD support is called XEN3. However, it does support Xen 4,
because the hypercall interface has remained identical.

Older Xen had a python-based management tool called xm, now replaced
by xl.

Architecture
------------

Xen 4.5 and later runs on x86_64 hardware (the NetBSD amd64 port).
There is a concept of Xen running on ARM, but there are no reports of
this working with NetBSD.

The dom0 system should be amd64. (Instructions for i386PAE dom0 have
been removed from the HOWTO.)

The domU can be i386PAE or amd64. i386PAE was at one point considered
[faster](https://lists.xen.org/archives/html/xen-devel/2012-07/msg00085.html)
than amd64.

# Creating a dom0

In order to install NetBSD as a dom0, one must first install a normal
NetBSD system, and then pivot the install to a dom0 install by changing
the kernel and boot configuration.

In 2018-05, trouble booting a dom0 was reported with 256M of RAM: with
512M it worked reliably. This does not make sense, but if you see
"not ELF" after Xen boots, try increasing dom0 RAM.

Styles of dom0 operation
------------------------

There are two basic ways to use Xen. The traditional method is for
the dom0 to do absolutely nothing other than providing support to some
number of domUs. Such a system was probably installed for the sole
purpose of hosting domUs, and sits in a server room on a UPS.

The other way is to put Xen under a normal-usage computer, so that the
dom0 is what the computer would have been without Xen, perhaps a
desktop or laptop. Then, one can run domUs at will. Purists will
deride this as less secure than the previous approach, and for a
computer whose purpose is to run domUs, they are right. But Xen and a
dom0 (without domUs) is not meaningfully less secure than the same
things running without Xen. One can boot Xen or boot regular NetBSD
alternately with little trouble, simply refraining from starting the
Xen daemons when not running Xen.

Note that NetBSD as dom0 does not support multiple CPUs. This will
limit the performance of the Xen/dom0 workstation approach.

Installation of NetBSD
----------------------

[Install NetBSD/amd64](/guide/inst/)
just as you would if you were not using Xen.
However, the partitioning approach is very important.

If you want to use RAIDframe for the dom0, there are no special issues
for Xen. Typically one provides RAID storage for the dom0, and the
domU systems are unaware of RAID. The 2nd-stage loader bootxx_* skips
over a RAID1 header to find /boot from a filesystem within a RAID
partition; this is no different when booting Xen.

There are 4 styles of providing backing storage for the virtual disks
used by domUs: raw partitions, LVM, file-backed vnd(4), and SAN.

With raw partitions, one has a disklabel (or gpt) partition sized for
each virtual disk to be used by the domU. (If you are able to predict
how domU usage will evolve, please add an explanation to the HOWTO.
Seriously, needs tend to change over time.)

One can use lvm(8) to create logical devices to use for domU disks.
This is almost as efficient as raw disk partitions and more flexible.
Hence raw disk partitions should typically not be used.

One can use files in the dom0 file system, typically created by dd'ing
/dev/zero to create a specific size. This is somewhat less efficient,
but very convenient, as one can cp the files for backup, or move them
between dom0 hosts.
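
For example, to prepare one backing store of each kind for a domU
named foo (a sketch only; the volume group vg0 and the /n0/xen
directory are assumptions, adjust to your layout):

[[!template id=programlisting text="""
# Create a 4 GB logical volume in an existing volume group vg0.
lvm lvcreate -L 4G -n foo-wd0 vg0
# Or create a 4 GB file-backed disk image filled with zeros.
dd if=/dev/zero of=/n0/xen/foo-wd0 bs=1m count=4096
"""]]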

Finally, in theory one can place the files backing the domU disks in a
SAN. (This is an invitation for someone who has done this to add a
HOWTO page.)

Installation of Xen
-------------------

We will consider that you chose to use Xen 4.13, with NetBSD/amd64 as
dom0. In the dom0, install xenkernel413 and xentools413 from pkgsrc.
See [the pkgsrc
documentation](http://www.NetBSD.org/docs/pkgsrc/) for help with pkgsrc.
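
For example, using binary packages (a sketch; this assumes pkgin is
set up and binary packages are available for your release, otherwise
build from pkgsrc source):

[[!template id=programlisting text="""
# pkgin install xenkernel413 xentools413
"""]]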

Once this is done, install the Xen kernel itself:

[[!template id=programlisting text="""
# cp /usr/pkg/xen413-kernel/xen.gz /
"""]]

For debugging, one may copy xen-debug.gz instead; this is conceptually
similar to DIAGNOSTIC and DEBUG in NetBSD, and is mainly useful with a
serial console.

Then, place a NetBSD XEN3_DOM0 kernel in the `/` directory. Such a
kernel can either be compiled manually, or downloaded from the NetBSD
FTP site, for example at:

[[!template id=programlisting text="""
ftp.netbsd.org/pub/NetBSD/NetBSD-8.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
"""]]

Both Xen and the NetBSD kernel may be left compressed.

Because you already installed NetBSD, you have a working boot setup
with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
beginning of your root file system, /boot present, and likely
/boot.cfg. (If not, fix before continuing!)

Add a line to /boot.cfg to boot Xen:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
"""]]

This specifies that the dom0 should have 512MB of RAM, leaving the rest
to be allocated for domUs. As with non-Xen systems, you should also
keep a line to boot /netbsd (a kernel that works without Xen) and
fallback versions of the non-Xen kernel, Xen, and the dom0 kernel.

To use a serial console, use:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz;multiboot /xen.gz dom0_mem=512M console=com1 com1=9600,8n1
"""]]

which will use the first serial port for Xen (which counts starting
from 1, unlike NetBSD which counts starting from 0), forcing
speed/parity. Because the NetBSD command line lacks a
"console=pc" argument, it will use the default "xencons" console device,
which directs the console I/O through Xen to the same console device Xen
itself uses (in this case, the serial port).

In an attempt to add performance, one can also add `dom0_max_vcpus=1 dom0_vcpus_pin`,
to force only one vcpu to be provided (since NetBSD dom0 can't use
more) and to pin that vcpu to a physical CPU. Xen has
[many boot options](http://xenbits.xenproject.org/docs/4.13-testing/misc/xen-command-line.html),
and other than dom0 memory and max_vcpus, they are generally not
necessary.

Now you have a system that will boot Xen and the dom0 kernel, but then
just run the dom0 kernel; no domUs can be started until you configure
the dom0 tools.

In a dom0 kernel, kernfs is mandatory for the Xen tools to communicate
with the kernel, so ensure that /kern is in fstab.

Copy the boot scripts into `/etc/rc.d`:

[[!template id=programlisting text="""
# cp /usr/pkg/share/examples/rc.d/xen* /etc/rc.d/
"""]]

Enable `xencommons`:

[[!template id=filecontent name="/etc/rc.conf" text="""
xencommons=YES
"""]]

Now, reboot so that you are running a DOM0 kernel under Xen, rather
than GENERIC without Xen.

TODO: Recommend for/against xen-watchdog.

Once the reboot is done, use `xl` to inspect Xen's boot messages,
available resources, and running domains. For example:

[[!template id=programlisting text="""
# xl dmesg
... xen's boot info ...
# xl info
... available memory, etc ...
# xl list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0       64    0  r----     58.1
"""]]

Xen logs will be in /var/log/xen.

### Issues with xencommons

`xencommons` starts `xenstored`, which stores data on behalf of dom0 and
domUs. It does not currently work to stop and start xenstored.
Certainly all domUs should be shut down first, following the sort order
of the rc.d scripts. However, the dom0 sets up state with xenstored,
and is not notified when xenstored exits, leading to not recreating
the state when the new xenstored starts. Until there's a mechanism to
make this work, one should not expect to be able to restart xenstored
(and thus xencommons). There is currently no reason to expect that
this will get fixed any time soon.

anita (for testing NetBSD)
--------------------------

With the setup so far, one should be able to run
anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as
root, because anita must create a domU):

[[!template id=programlisting text="""
anita --vmm=xl test file:///usr/obj/i386/
"""]]

Xen-specific NetBSD issues
--------------------------

There are (at least) two additional things different about NetBSD as a
dom0 kernel compared to hardware.

One is that the module ABI is different because some of the #defines
change, so one must build modules for Xen. As of netbsd-7, the build
system does this automatically.

The other difference is that XEN3_DOM0 does not have exactly the same
options as GENERIC. While it is debatable whether or not this is a
bug, users should be aware of this and can simply add missing config
items if desired.

Updating NetBSD in a dom0
-------------------------

This is just like updating NetBSD on bare hardware, assuming the new
version supports the version of Xen you are running. Generally, one
replaces the kernel and reboots, and then overlays userland binaries
and adjusts `/etc`.

Note that one must update both the non-Xen kernel typically used for
rescue purposes and the DOM0 kernel used with Xen.
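
For example, to drop in a new dom0 kernel built elsewhere (a sketch;
the releasedir path is hypothetical, and the file name must match what
/boot.cfg loads):

[[!template id=programlisting text="""
# cp /path/to/releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz /netbsd-XEN3_DOM0.gz
# shutdown -r now
"""]]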

Converting from grub to /boot
-----------------------------

To convert from grub to /boot, install an MBR bootblock with fdisk,
bootxx_* with installboot, /boot and /boot.cfg. This really should be
no different than completely reinstalling boot blocks on a non-Xen
system.

The following instructions were used to convert a system from
grub to /boot. The system was originally installed in February of
2006 with a RAID1 setup and grub to boot Xen 2, and has been updated
over time. Before these commands, it was running NetBSD 6 i386, Xen
4.1 and grub, much like the message linked earlier in the grub
section.

[[!template id=programlisting text="""
# Install MBR bootblocks on both disks.
fdisk -i /dev/rwd0d
fdisk -i /dev/rwd1d
# Install NetBSD primary boot loader (/ is FFSv1) into RAID1 components.
installboot -v /dev/rwd0d /usr/mdec/bootxx_ffsv1
installboot -v /dev/rwd1d /usr/mdec/bootxx_ffsv1
# Install secondary boot loader
cp -p /usr/mdec/boot /
# Create boot.cfg following earlier guidance:
menu=Xen:load /netbsd-XEN3PAE_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
menu=Xen.ok:load /netbsd-XEN3PAE_DOM0.ok.gz console=pc;multiboot /xen.ok.gz dom0_mem=512M
menu=GENERIC:boot
menu=GENERIC single-user:boot -s
menu=GENERIC.ok:boot netbsd.ok
menu=GENERIC.ok single-user:boot netbsd.ok -s
menu=Drop to boot prompt:prompt
default=1
timeout=30
"""]]

Upgrading Xen versions
----------------------

Minor version upgrades are trivial. Just rebuild/replace the
xenkernel version and copy the new xen.gz to `/` (where `/boot.cfg`
references it), and reboot.

Major version upgrades are conceptually not difficult, but can run
into all the issues found when installing Xen. Remove the old
xenkernel and xentools packages and install the new matching pair,
and copy the new xen.gz to `/`. Ensure that the contents of
/etc/rc.d/xen* are correct and that the domU config files are still
valid for the new version.
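
For example, after the updated xenkernel413 package has been installed
(a sketch following the layout used above):

[[!template id=programlisting text="""
# cp /usr/pkg/xen413-kernel/xen.gz /
# shutdown -r now
"""]]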

# Unprivileged domains (domU)

This section describes general concepts about domUs. It does not
address specific domU operating systems or how to install them. The
config files for domUs are typically in `/usr/pkg/etc/xen`, and are
typically named so that the file name, domU name and the domU's host
name match.

The domU is provided with CPU and memory by Xen, configured by the
dom0. The domU is provided with disk and network by the dom0,
mediated by Xen, and configured in the dom0.

Entropy in domUs can be an issue; physical disks and network are on
the dom0. NetBSD's /dev/random system works, but is often challenged.

Config files
------------

See /usr/pkg/share/examples/xen/xlexample*
for a small number of well-commented examples, mostly for running
GNU/Linux.

The following is an example minimal domain configuration file. The
domU serves as a network file server.

[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
name = "domU-id"
kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
         'file:/n0/xen/foo-wd1,0x1,w' ]
"""]]

The domain will have the name given in the `name` setting. The kernel
has the host/domU name in it, so that on the dom0 one can update the
various domUs independently. The `vif` line causes an interface to be
provided, with a specific MAC address (do not reuse MAC addresses!),
in bridge mode. Two disks are provided, and they are both writable;
the bits are stored in files and Xen attaches them to a vnd(4) device
in the dom0 on domain creation. The system treats xbd0 as the boot
device without needing explicit configuration.

By convention, domain config files are kept in `/usr/pkg/etc/xen`. Note
that "xl create" takes the name of a config file, while other commands
take the name of a domain.

Examples of commands:

[[!template id=programlisting text="""
xl create /usr/pkg/etc/xen/foo
xl console domU-id
xl create -c /usr/pkg/etc/xen/foo
xl shutdown domU-id
xl list
"""]]

Typing `^]` will exit the console session. Shutting down a domain is
equivalent to pushing the power button; a NetBSD domU will receive a
power-press event and do a clean shutdown. Shutting down the dom0
will trigger controlled shutdowns of all configured domUs.

CPU and memory
--------------

A domain is provided with some number of vcpus, up to the number
of CPUs seen by the hypervisor. For a domU, it is controlled
from the config file by the "vcpus = N" directive.

A domain is provided with memory; this is controlled in the config
file by "memory = N" (in megabytes). In the straightforward case, the
sum of the memory allocated to the dom0 and all domUs must be less
than the available memory.

Xen also provides a "balloon" driver, which can be used to let domains
use more memory temporarily.
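
For example, vcpu use can be inspected and the memory balloon target
changed from the dom0 with xl (a sketch; "domU-id" is the domain name
from the earlier example, and a domain cannot be ballooned above its
configured maximum):

[[!template id=programlisting text="""
# xl vcpu-list domU-id
# xl mem-set domU-id 768
"""]]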

Virtual disks
-------------

In domU config files, the disks are defined as a sequence of 3-tuples:

 * The first element is "method:/path/to/disk". Common methods are
   "file:" for a file-backed vnd, and "phy:" for something that is
   already a device, such as an LVM logical volume.

 * The second element is an artifact of how virtual disks are passed to
   Linux, and a source of confusion with NetBSD Xen usage. Linux domUs
   are given a device name to associate with the disk, and values like
   "hda1" or "sda1" are common. In a NetBSD domU, the first disk appears
   as xbd0, the second as xbd1, and so on. However, xl demands a
   second argument. The name given is converted to a major/minor by
   calling stat(2) on the name in /dev and this is passed to the domU.
   In the general case, the dom0 and domU can be different operating
   systems, and it is an unwarranted assumption that they have consistent
   numbering in /dev, or even that the dom0 OS has a /dev. With NetBSD
   as both dom0 and domU, using values of 0x0 for the first disk and 0x1
   for the second works fine and avoids this issue. For a GNU/Linux
   guest, one can create /dev/hda1 in /dev, or pass 0x301 for
   /dev/hda1.

 * The third element is "w" for writable disks, and "r" for read-only
   disks.

Example:

[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
disk = [ 'file:/n0/xen/foo-wd0,0x0,w' ]
"""]]

Note that NetBSD by default creates only vnd[0123]. If you need more
than 4 total virtual disks at a time, run e.g. "./MAKEDEV vnd4" in the
dom0.

Note that NetBSD by default creates only xbd[0123]. If you need more
virtual disks in a domU, run e.g. "./MAKEDEV xbd4" in the domU.

Virtual Networking
------------------

Xen provides virtual Ethernets, each of which connects the dom0 and a
domU. For each virtual network, there is an interface "xvifN.M" in
the dom0, and a matching interface xennetM (NetBSD name) in domU index N.
The interfaces behave as if there is an Ethernet with two
adapters connected. From this primitive, one can construct various
configurations. We focus on two common and useful cases for which
there are existing scripts: bridging and NAT.

With bridging (in the example above), the domU perceives itself to be
on the same network as the dom0. For server virtualization, this is
usually best. Bridging is accomplished by creating a bridge(4) device
and adding the dom0's physical interface and the various xvifN.0
interfaces to the bridge. One specifies "bridge=bridge0" in the domU
config file. The bridge must be set up already in the dom0; an
example /etc/ifconfig.bridge0 is:

[[!template id=filecontent name="/etc/ifconfig.bridge0" text="""
create
up
!brconfig bridge0 add wm0
"""]]

With NAT, the domU perceives itself to be behind a NAT running on the
dom0. This is often appropriate when running Xen on a workstation.
TODO: NAT appears to be configured by "vif = [ '' ]".

The MAC address specified is the one used for the interface in the new
domain. The interface in dom0 will use this address XOR'd with
00:00:00:01:00:00. Random MAC addresses are assigned if not given.

Starting domains automatically
------------------------------

To start domains `domU-netbsd` and `domU-linux` at boot and shut them
down cleanly on dom0 shutdown, add the following in rc.conf:

[[!template id=filecontent name="/etc/rc.conf" text="""
xendomains="domU-netbsd domU-linux"
"""]]

# Creating a domU

Creating domUs is almost entirely independent of operating system. We
have already presented the basics of config files. Note that you must
have already completed the dom0 setup so that "xl list" works.

Creating a NetBSD PV domU
-------------------------

See the earlier config file, and adjust memory. Decide on how much
storage you will provide, and prepare it (file or LVM).

While the kernel will be obtained from the dom0 file system, the same
file should be present in the domU as /netbsd so that tools like
savecore(8) can work. (This is helpful but not necessary.)

The kernel must be specifically for Xen and for use as a domU. The
i386 and amd64 ports provide the following kernels:

    i386 XEN3PAE_DOMU
    amd64 XEN3_DOMU

This will boot NetBSD, but this is not that useful if the disk is
empty. One approach is to unpack sets onto the disk outside of xen
(by mounting it, just as you would prepare a physical disk for a
system you can't run the installer on).

A second approach is to run an INSTALL kernel, which has a miniroot
and can load sets from the network. To do this, copy the INSTALL
kernel to / and change the kernel line in the config file to:

    kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"

Then, start the domain as "xl create -c configfile".

Alternatively, if you want to install NetBSD/Xen with a CDROM image,
the following line should be used in the config file.

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be
selected. The CDROM device should be changed to `xbd1d`.

Once done installing, "halt -p" the new domain (don't reboot or halt,
it would reload the INSTALL_XEN3_DOMU kernel even if you changed the
config file), switch the config file back to the XEN3_DOMU kernel,
and start the new domain again. Now it should be able to use "root on
xbd0a" and you should have a functional NetBSD domU.

TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
`/etc/ttys` except *console*, and comment out all screens in
`/etc/wscons.conf`.
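
For example, a domU's `/etc/ttys` might end up looking like this (a
sketch only; the console is enabled and the ttyE* wscons screens are
turned off):

[[!template id=filecontent name="/etc/ttys" text="""
console "/usr/libexec/getty Pc"         vt100   on secure
ttyE0   "/usr/libexec/getty Pc"         vt220   off secure
ttyE1   "/usr/libexec/getty Pc"         vt220   off secure
ttyE2   "/usr/libexec/getty Pc"         vt220   off secure
ttyE3   "/usr/libexec/getty Pc"         vt220   off secure
"""]]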

It is also desirable to add

    powerd=YES

in rc.conf. This way, the domain will be properly shut down when
`xl shutdown` is used on it from the dom0.

It is not strictly necessary to have a kernel (as /netbsd) in the domU
file system. However, various programs (e.g. netstat) will use that
kernel to look up symbols to read from kernel virtual memory. If
/netbsd is not the running kernel, those lookups will fail. (This is
not really a Xen-specific issue, but because the domU kernel is
obtained from the dom0, it is far more likely to be out of sync or
missing with Xen.)

Creating a Linux domU
---------------------

Creating unprivileged Linux domains isn't much different from
unprivileged NetBSD domains, but there are some details to know.
First, the second parameter passed to the disk declaration (the '0x1'
in the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda). Linux builds device numbers as: (major \<\< 8 + minor).
So, hda1 which has major 3 and minor 1 on a Linux system will have
device number 0x301. Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers. To export a partition to a Linux guest we can use:

    disk = [ 'phy:/dev/wd0e,0x300,w' ]
    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as the
root partition.

To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, make adjustments in
`/etc` (fstab, network config). It should also be possible to extract
binary packages such as .rpm or .deb directly to the mounted partition
using the appropriate tool, possibly running under NetBSD's Linux
emulation. Once the file system has been populated, umount it. If
desirable, the file system can be converted to ext3 using tune2fs -j.
It should now be possible to boot the Linux guest domain, using one of
the vmlinuz-\*-xenU kernels available in the Xen binary distribution.

To get the Linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration since not all Linux distributions auto-attach a
tty to the xen console.

## Creating a NetBSD HVM domU

Use type='hvm', probably. Use a GENERIC kernel within the disk image.
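
A minimal HVM config might look like the following (a sketch only; the
image path and bridge name are assumptions, and option details are in
xl.cfg(5)):

[[!template id=filecontent name="/usr/pkg/etc/xen/hvm-foo" text="""
type = "hvm"
name = "hvm-foo"
memory = 1024
vcpus = 1
vif = [ 'mac=aa:00:00:d1:00:0a,bridge=bridge0' ]
# Whole-disk image containing bootblocks and a GENERIC kernel.
disk = [ 'file:/n0/xen/hvm-foo.img,hda,w' ]
"""]]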

## Creating a NetBSD PVH domU

Use type='pvh'.

\todo Explain where the kernel comes from.

Creating a Solaris domU
-----------------------

See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/).

PCI passthrough: Using PCI devices in guest domains
---------------------------------------------------

NB: PCI passthrough only works on some Xen versions and as of 2020 it
is not clear that it works on any version in pkgsrc. Reports
confirming or denying this notion should be sent to port-xen@.

The dom0 can give other domains access to selected PCI
devices. This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller. However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices. Also, it's not possible to export ISA devices to non-dom0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.

If the dom0 is NetBSD, it has to be running Xen 3.1, as support has
not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to
the "pciback" driver in dom0. Devices passed to the dom0 via the
pciback.hide boot parameter will attach to "pciback" instead of the
usual driver. The list of devices is specified as "(bus:dev.func)",
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.

PCI devices to be exported to a domU are listed in the "pci" array of
the domU's config file, with the format "0000:bus:dev.func".

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an "xpci" device will show up, to which one or more pci
buses will attach. Then the PCI drivers will attach to PCI buses as
usual. Note that the default NetBSD DOMU kernels do not have "xpci"
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU. Here's a kernel config example;
note that only the "xpci" lines are unusual.

    include "arch/i386/conf/XEN3_DOMU"

    # Add support for PCI buses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # PCI USB controllers
    uhci* at pci? dev ? function ?  # Universal Host Controller (Intel)

    # USB bus support
    usb* at uhci?

    # USB Hubs
    uhub* at usb?
    uhub* at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass* at uhub? port ? configuration ? interface ?
    wd* at umass?

    # SCSI controllers
    ahc* at pci? dev ? function ?   # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd* at scsibus? target ? lun ?  # SCSI disk drives
    cd* at scsibus? target ? lun ?  # SCSI CD-ROM drives

# Specific Issues

## domU

[NetBSD 5 is known to panic.](http://mail-index.netbsd.org/port-xen/2018/04/17/msg009181.html)
(However, NetBSD 5 systems should be updated to a supported version.)

# NetBSD as a domU in a VPS

The bulk of the HOWTO is about using NetBSD as a dom0 on your own
hardware. This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0. This is not intended to be an exhaustive list of VPS providers;
only a few are mentioned that specifically support NetBSD.

VPS operators provide varying degrees of access and mechanisms for
configuration. The big issue is usually how one controls which kernel
is booted, because the kernel is nominally in the dom0 file system (to
which VPS users do not normally have access). A second issue is how
to install NetBSD. A VPS user may want to compile a kernel for
security updates, to run npf or IPsec, or for any other reason one
would want to change a kernel.

One approach is to have an administrative interface to upload a kernel,
or to select from a prepopulated list. Other approaches are pygrub
(deprecated) and pvgrub, which are ways to have a bootloader obtain a
kernel from the domU file system. This is closer to a regular physical
computer, where someone who controls a machine can replace the kernel.

A second issue is multiple CPUs. With NetBSD 6, domUs support
multiple vcpus, and it is typical for VPS providers to enable multiple
CPUs for NetBSD domUs.

## Complexities due to Xen changes

Xen has many security advisories and people running Xen systems make
different choices.

### stub domains

Some (Linux only?) dom0 systems use something called "stub domains" to
isolate qemu from the dom0 system, as a security and reliability
mechanism when running HVM domUs. Somehow, NetBSD's GENERIC kernel
ends up using PIO for disks rather than DMA. Of course, all of this
is emulated, but emulated PIO is unusably slow. This problem is not
currently understood.

### Grant tables

There are multiple versions of using grant tables, and some security
advisories have suggested disabling some versions. Some versions of
NetBSD apparently only use specific versions and this can lead to
"NetBSD current doesn't run on hosting provider X" situations.

\todo Explain better.

pvgrub
------

pvgrub is a version of grub that uses PV operations instead of BIOS
calls. It is booted from the dom0 as the domU kernel, and then reads
/grub/menu.lst and loads a kernel from the domU file system.
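
A domU's /grub/menu.lst for pvgrub might look like this (a hypothetical
sketch in grub-legacy syntax; the kernel path is whatever you keep in
the domU file system or boot partition):

[[!template id=filecontent name="/grub/menu.lst" text="""
default=0
timeout=5
title NetBSD
  root (hd0,0)
  kernel /netbsd-XEN3_DOMU
"""]]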

[Panix](http://www.panix.com/) lets users use pvgrub. Panix reports
that pvgrub works with FFSv2 with 16K/2K and 32K/4K block/frag sizes
(and hence with defaults from "newfs -O 2"). See [Panix's pvgrub
page](http://www.panix.com/v-colo/grub.html), which describes only
Linux but should be updated to cover NetBSD :-).

[prgmr.com](http://prgmr.com/) also lets users with pvgrub boot
their own kernel. See the [prgmr.com NetBSD
HOWTO](http://wiki.prgmr.com/mediawiki/index.php/NetBSD_as_a_DomU)
(which is in need of updating).

It appears that [grub's FFS
code](http://xenbits.xensource.com/hg/xen-unstable.hg/file/bca284f67702/tools/libfsimage/ufs/fsys_ufs.c)
does not support all aspects of modern FFS, but there are also reports
that FFSv2 works fine. At prgmr, typically one has an ext2 or FAT
partition for the kernel with the intent that grub can understand it,
which leads to /netbsd not being the actual kernel. One must remember
to update the special boot partition.

pygrub
------

pygrub runs in the dom0 and looks into the domU file system. This
implies that the domU must have a kernel in a file system in a format
known to pygrub.

pygrub doesn't seem to work to load Linux images under a NetBSD dom0,
and is inherently less secure than pvgrub due to running inside the
dom0. For both these reasons, pygrub should not be used, and is only
still present so that historical domU images using it still work.

As of 2014, pygrub seems to be of mostly historical
interest. New domUs should use pvgrub.

Amazon
------

See the [Amazon EC2 page](/amazon_ec2/).

Links and further information
=============================

- The [HowTo on Installing into RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
  explains how to set up booting a dom0 with Xen using grub
  with NetBSD's RAIDframe. (This is obsolete with the use of
  NetBSD's native boot.)
- An example of how to use NetBSD's native bootloader to load
  NetBSD/Xen instead of Grub can be found in the i386/amd64 boot(8)
  and boot.cfg(5) manpages.