--- wikisrc/ports/xen/howto.mdwn	2019/04/11 17:33:16	1.165
+++ wikisrc/ports/xen/howto.mdwn	2020/11/15 14:31:58	1.180
@@ -8,21 +8,34 @@ systems which operate in an unprivileged
 from the domU systems are forwarded by the Xen hypervisor to the dom0
 to be fulfilled.
 
-Xen supports different styles of guest:
+Xen supports different styles of guests; see [PV on HVM](https://wiki.xen.org/wiki/PV_on_HVM) and [PVH(v2)](https://wiki.xenproject.org/wiki/PVH_(v2\)_Domu) for upstream documentation.
 
 [[!table data="""
 Style of guest |Supported by NetBSD
 PV |Yes (dom0, domU)
 HVM |Yes (domU)
-PVHVM |No
-PVH |No
+PVHVM |current-only (domU)
+PVH |current-only (domU, dom0 not yet)
 """]]
 
 In Para-Virtualized (PV) mode, the guest OS does not attempt to access
 hardware directly, but instead makes hypercalls to the hypervisor; PV
-guests must be specifically coded for Xen. In HVM mode, no guest
-modification is required; however, hardware support is required, such
-as VT-x on Intel CPUs and SVM on AMD CPUs.
+guests must be specifically coded for Xen.
+
+In HVM mode, no guest modification is required; however, hardware
+support is required, such as VT-x on Intel CPUs and SVM on AMD CPUs.
+The dom0 runs qemu to emulate hardware.
+
+In PVHVM mode, the guest runs as HVM, but additionally can use PV
+drivers for efficiency.
+
+There have been two PVH modes: original PVH and PVHv2. Original PVH
+was based on PV mode and is no longer relevant at all. PVHv2 is
+basically lightweight HVM with PV drivers. A critical feature of it
+is that qemu is not needed; the hypervisor can do the emulation that
+is required. Thus, a dom0 can be PVHv2.
+
+The source code uses PVH and config files use pvh; both refer to PVHv2.
 
 At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
 The dom0 can start one or more domUs. (Booting is explained in detail
@@ -35,7 +48,7 @@ website](http://www.xenproject.org/).
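[Editorial note: the guest styles above are selected in an xl guest config file with the `type=` key (Xen 4.10 and later; older configs used `builder=`). A purely illustrative fragment, not taken from this HOWTO:]

```
# Illustrative xl config fragment: pick exactly one guest style.
type = "pv"    # para-virtualized; guest kernel must be Xen-aware
#type = "hvm"  # fully virtualized; needs VT-x/SVM and qemu in the dom0
#type = "pvh"  # PVHv2: lightweight HVM with PV interfaces, no qemu
```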
 [[!toc]]
 
-#Versions and Support
+# Versions and Support
 
 In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
 xenkernel and xentools. We will refer only to the kernel versions,
@@ -45,12 +58,9 @@ matching versions.
 
 Versions available in pkgsrc:
 
 [[!table data="""
-Xen Version |Package Name |Xen CPU Support |EOL'ed By Upstream
-4.2 |xenkernel42 |32bit, 64bit |Yes
-4.5 |xenkernel45 |64bit |Yes
-4.6 |xenkernel46 |64bit |Partially
-4.8 |xenkernel48 |64bit |No
-4.11 |xenkernel411 |64bit |No
+Xen Version |Package Name |Xen CPU Support |xm? |EOL'ed By Upstream
+4.11 |xenkernel411 |x86_64 | |No
+4.13 |xenkernel413 |x86_64 | |No
 """]]
 
 See also the [Xen Security Advisory page](http://xenbits.xen.org/xsa/).
@@ -63,27 +73,24 @@ dom0 |No
 domU |Yes
 """]]
 
-Note: NetBSD support is called XEN3.  However, it does support Xen 4,
+Note: NetBSD support is called XEN3. However, it does support Xen 4,
 because the hypercall interface has remained identical.
 
+Older Xen had a python-based management tool called xm, now replaced
+by xl.
+
 Architecture
 ------------
 
-Xen itself runs on x86_64 hardware.
-
-The dom0 system, plus each domU, can be either i386PAE or amd64.
-i386 without PAE is not supported.
-
-The standard approach is to use NetBSD/amd64 for the dom0.
+Xen 4.5 and later runs on x86_64 hardware (the NetBSD amd64 port).
+There is a concept of Xen running on ARM, but there are no reports of
+this working with NetBSD.
 
-To use an i386PAE dom0, one must build or obtain a 64bit Xen kernel and
-install it on the system.
+The dom0 system should be amd64. (Instructions for an i386PAE dom0
+have been removed from the HOWTO.)
 
-For domUs, i386PAE is considered as
-[faster](https://lists.xen.org/archives/html/xen-devel/2012-07/msg00085.html)
-than amd64.
+The domU can be i386PAE or amd64.
+i386PAE was at one point considered [faster](https://lists.xen.org/archives/html/xen-devel/2012-07/msg00085.html) than amd64.
-#Creating a dom0
+# Creating a dom0
 
 In order to install a NetBSD as a dom0, one must first install a normal
 NetBSD system, and then pivot the install to a dom0 install by changing
 the kernel and boot configuration.
@@ -102,7 +109,7 @@ just as you would if you were not using
 Installation of Xen
 -------------------
 
-We will consider that you chose to use Xen 4.8, with NetBSD/amd64 as
+We will consider that you chose to use Xen 4.13, with NetBSD/amd64 as
 dom0. In the dom0, install xenkernel413 and xentools413 from pkgsrc.
 Once this is done, install the Xen kernel itself:
@@ -142,7 +149,7 @@ itself uses (in this case, the serial po
 In an attempt to add performance, one can also add `dom0_max_vcpus=1
 dom0_vcpus_pin`, to force only one vcpu to be provided (since NetBSD
 dom0 can't use more) and to pin that vcpu to a physical CPU. Xen has
-[many boot options](http://xenbits.xenproject.org/docs/4.8-testing/misc/xen-command-line.html),
+[many boot options](http://xenbits.xenproject.org/docs/4.13-testing/misc/xen-command-line.html),
 and other than dom0 memory and max_vcpus, they are generally not
 necessary.
@@ -193,7 +200,7 @@ this will get fixed any time soon.
 anita (for testing NetBSD)
 --------------------------
 
-With the setup so far (assuming 4.8/xl), one should be able to run
+With the setup so far, one should be able to run
 anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as
 root, because anita must create a domU):
@@ -330,7 +337,7 @@ will trigger controlled shutdowns of all
 CPU and memory
 --------------
 
-A domain is provided with some number of vcpus, less than the number
+A domain is provided with some number of vcpus, up to the number
 of CPUs seen by the hypervisor. For a domU, it is controlled from
 the config file by the "vcpus = N" directive.
@@ -424,14 +431,14 @@ down cleanly on dom0 shutdown, add the f
 xendomains="domU-netbsd domU-linux"
 """]]
 
-#Creating a domU
+# Creating a domU
 
 Creating domUs is almost entirely independent of operating system.
 We have already presented the basics of config files. Note that you
 must have already completed the dom0 setup so that "xl list" works.
 
-Creating a NetBSD domU
-----------------------
+Creating a NetBSD PV domU
+-------------------------
 
 See the earlier config file, and adjust memory. Decide on how much
 storage you will provide, and prepare it (file or LVM).
@@ -546,6 +553,17 @@ To get the Linux console right, you need
 to your configuration since not all Linux distributions auto-attach a
 tty to the xen console.
 
+## Creating a NetBSD HVM domU
+
+Use type='hvm', probably. Use a GENERIC kernel within the disk image.
+
+## Creating a NetBSD PVH domU
+
+Use type='pvh'.
+
+\todo Explain where the kernel comes from.
+
+
 Creating a Solaris domU
 -----------------------
 
 See possibly outdated
@@ -556,6 +574,10 @@ See possibly outdated
 PCI passthrough: Using PCI devices in guest domains
 ---------------------------------------------------
 
+NB: PCI passthrough only works on some Xen versions and as of 2020 it
+is not clear that it works on any version in pkgsrc. Reports
+confirming or denying this notion should be sent to port-xen@.
+
 The dom0 can give other domains access to selected PCI devices. This
 can allow, for example, a non-privileged domain to have access to a
 physical network interface or disk controller. However,
@@ -623,7 +645,14 @@ note that only the "xpci" lines are unus
 cd* at scsibus? target ? lun ?  # SCSI CD-ROM drives
 
-#NetBSD as a domU in a VPS
+# Specific Issues
+
+## domU
+
+[NetBSD 5 is known to panic.](http://mail-index.netbsd.org/port-xen/2018/04/17/msg009181.html)
+(However, NetBSD 5 systems should be updated to a supported version.)
+
+# NetBSD as a domU in a VPS
 
 The bulk of the HOWTO is about using NetBSD as a dom0 on your own
 hardware. This section explains how to deal with Xen in a domU as a
@@ -650,13 +679,28 @@ A second issue is multiple CPUs. With N
 multiple vcpus, and it is typical for VPS providers to enable multiple
 CPUs for NetBSD domUs.
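[Editorial note: pulling together the pieces discussed above (guest type, memory, vcpus, disk, network), a hypothetical complete PV domU config might look like the sketch below. Every name and path is a placeholder for illustration, not something this HOWTO prescribes:]

```
# /usr/pkg/etc/xen/domU-netbsd.cfg -- hypothetical example
name = "domU-netbsd"
type = "pv"
kernel = "/netbsd-XEN3_DOMU.gz"   # placeholder path to a DOMU kernel
memory = 1024                     # MB
vcpus = 2                         # up to the number of CPUs the hypervisor sees
disk = [ 'file:/vm/domU-netbsd.img,xvda,rw' ]
vif = [ 'bridge=bridge0' ]
```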
-pygrub
--------
+## Complexities due to Xen changes
 
-pygrub runs in the dom0 and looks into the domU file system. This
-implies that the domU must have a kernel in a file system in a format
-known to pygrub. As of 2014, pygrub seems to be of mostly historical
-interest.
+Xen has many security advisories and people running Xen systems make
+different choices.
+
+### stub domains
+
+Some (Linux only?) dom0 systems use something called "stub domains" to
+isolate qemu from the dom0 system, as a security and reliability
+mechanism when running HVM domUs. Somehow, NetBSD's GENERIC kernel
+ends up using PIO for disks rather than DMA. Of course, all of this
+is emulated, but emulated PIO is unusably slow. This problem is not
+currently understood.
+
+### Grant tables
+
+There are multiple versions of the grant table interface, and some
+security advisories have suggested disabling older versions. Some
+versions of NetBSD apparently use only specific versions, and this can
+lead to "NetBSD current doesn't run on hosting provider X" situations.
+
+\todo Explain better.
 
 pvgrub
 ------
@@ -684,6 +728,21 @@ partition for the kernel with the intent
 which leads to /netbsd not being the actual kernel. One must remember
 to update the special boot partition.
 
+pygrub
+------
+
+pygrub runs in the dom0 and looks into the domU file system. This
+implies that the domU must have a kernel in a file system in a format
+known to pygrub.
+
+pygrub doesn't seem to work to load Linux images under NetBSD dom0,
+and is inherently less secure than pvgrub due to running inside dom0.
+For both these reasons, pygrub should not be used, and is only still
+present so that historical DomU images using it still work.
+
+As of 2014, pygrub was already of mostly historical interest; new
+DomUs should use pvgrub.
+
 Amazon
 ------