--- wikisrc/ports/xen/howto.mdwn	2020/11/13 20:41:56	1.177
+++ wikisrc/ports/xen/howto.mdwn	2020/11/15 14:31:58	1.180
@@ -8,14 +8,14 @@ systems which operate in an unprivileged
 from the domU systems are forwarded by the Xen hypervisor to the dom0
 to be fulfilled.
 
-Xen supports different styles of guests; see [PV on HVM](https://wiki.xen.org/wiki/PV_on_HVM) and [PVH(v2)](https://wiki.xenproject.org/wiki/PVH_(v2)_Domu) for upstream documentation.
+Xen supports different styles of guests; see [PV on HVM](https://wiki.xen.org/wiki/PV_on_HVM) and [PVH(v2)](https://wiki.xenproject.org/wiki/PVH_(v2\)_Domu) for upstream documentation.
 
 [[!table data="""
 Style of guest |Supported by NetBSD
 PV |Yes (dom0, domU)
 HVM |Yes (domU)
 PVHVM |current-only (domU)
-PVHv2 |current-only (domU, dom0 not yet)
+PVH |current-only (domU, dom0 not yet)
 """]]
 
 In Para-Virtualized (PV) mode, the guest OS does not attempt to access
@@ -29,9 +29,13 @@ The dom0 runs qemu to emulate hardware.
 In PVHVM mode, the guest runs as HVM, but additionally can use PV
 drivers for efficiency.
 
-In PVHv2H mode, operation is similar to PVHVM, except that qemu is not
-run and thus the PV interfaces for console, disks, networking are the
-only way to access these resources.
+There have been two PVH modes: original PVH and PVHv2.  Original PVH
+was based on PV mode and is no longer relevant.  PVHv2 is basically
+lightweight HVM with PV drivers.  A critical feature of PVHv2 is that
+qemu is not needed; the hypervisor itself performs the small amount of
+emulation required.  Thus, a dom0 can be PVHv2.
+
+The source code uses PVH and config files use pvh; both refer to PVHv2.
 
 At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
 The dom0 can start one or more domUs.
(Booting is explained in detail
@@ -105,7 +109,7 @@ just as you would if you were not using
 Installation of Xen
 -------------------
 
-We will consider that you chose to use Xen 4.8, with NetBSD/amd64 as
-dom0. In the dom0, install xenkernel48 and xentools48 from pkgsrc.
+We will consider that you chose to use Xen 4.13, with NetBSD/amd64 as
+dom0. In the dom0, install xenkernel413 and xentools413 from pkgsrc.
 
 Once this is done, install the Xen kernel itself:
@@ -145,7 +149,7 @@ itself uses (in this case, the serial po
 In an attempt to add performance, one can also add `dom0_max_vcpus=1
 dom0_vcpus_pin`, to force only one vcpu to be provided (since NetBSD
 dom0 can't use more) and to pin that vcpu to a physical CPU. Xen has
-[many boot options](http://xenbits.xenproject.org/docs/4.8-testing/misc/xen-command-line.html),
+[many boot options](http://xenbits.xenproject.org/docs/4.13-testing/misc/xen-command-line.html),
 and other than dom0 memory and max_vcpus, they are generally not
 necessary.
 
@@ -196,7 +200,7 @@ this will get fixed any time soon.
 anita (for testing NetBSD)
 --------------------------
 
-With the setup so far (assuming 4.8/xl), one should be able to run
+With the setup so far, one should be able to run
 anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as
 root, because anita must create a domU):
 
@@ -427,14 +431,14 @@ down cleanly on dom0 shutdown, add the f
 	xendomains="domU-netbsd domU-linux"
 """]]
 
-#Creating a domU
+# Creating a domU
 
 Creating domUs is almost entirely independent of operating system. We
 have already presented the basics of config files. Note that you must
 have already completed the dom0 setup so that "xl list" works.
 
-Creating a NetBSD domU
-----------------------
+Creating a NetBSD PV domU
+-------------------------
 
 See the earlier config file, and adjust memory. Decide on how much
 storage you will provide, and prepare it (file or LVM).
@@ -549,6 +553,17 @@ To get the Linux console right, you need
 to your configuration since not all Linux distributions auto-attach
 a tty to the xen console.
+## Creating a NetBSD HVM domU
+
+Use type='hvm', probably. Use a GENERIC kernel within the disk image.
+
+## Creating a NetBSD PVH domU
+
+Use type='pvh'.
+
+\todo Explain where the kernel comes from.
+
+
 Creating a Solaris domU
 -----------------------
 
@@ -559,6 +574,10 @@ See possibly outdated
 PCI passthrough: Using PCI devices in guest domains
 ---------------------------------------------------
 
+NB: PCI passthrough only works on some Xen versions and as of 2020 it
+is not clear that it works on any version in pkgsrc. Reports
+confirming or denying this notion should be sent to port-xen@.
+
 The dom0 can give other domains access to selected PCI devices. This
 can allow, for example, a non-privileged domain to have access to a
 physical network interface or disk controller. However,
@@ -660,6 +679,29 @@ A second issue is multiple CPUs. With N
 multiple vcpus, and it is typical for VPS providers to enable multiple
 CPUs for NetBSD domUs.
 
+## Complexities due to Xen changes
+
+Xen has many security advisories and people running Xen systems make
+different choices.
+
+### stub domains
+
+Some (Linux only?) dom0 systems use something called "stub domains" to
+isolate qemu from the dom0 system, as a security and reliability
+mechanism when running HVM domUs. Somehow, NetBSD's GENERIC kernel
+ends up using PIO for disks rather than DMA. Of course, all of this
+is emulated, but emulated PIO is unusably slow. This problem is not
+currently understood.
+
+### Grant tables
+
+There are multiple versions of the grant table interface, and some
+security advisories have suggested disabling certain versions. Some
+NetBSD versions apparently use only specific versions, and this can
+lead to "NetBSD current doesn't run on hosting provider X" situations.
+
+\todo Explain better.
+
 pvgrub
 ------
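
[Editor's sketch, not part of the diff above: the new HVM/PVH sections are terse, so here is a minimal xl config illustrating the type= setting they mention. The domain name, disk path, MAC address, bridge name, and sizes are illustrative assumptions, not values from this howto.]

```
# Hypothetical xl config for a NetBSD HVM domU; all names and paths
# below are made-up examples.  With type='hvm' the guest boots its
# GENERIC kernel from inside the disk image via the emulated firmware.
type = 'hvm'
name = 'netbsd-hvm'
memory = 1024
vcpus = 2
disk = [ 'file:/vm/netbsd-hvm.img,xvda,rw' ]
vif = [ 'mac=00:16:3e:00:00:01,bridge=bridge0' ]
```

For a PVH guest, type = 'pvh' would replace type = 'hvm'; as the \todo in the diff notes, how the kernel is supplied in that case is not yet documented here.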