[[!meta title="Xen HowTo"]]

Xen is a Type 1 hypervisor which supports running multiple guest operating
systems on a single physical machine. One uses the Xen kernel to control the
CPU, memory and console, a dom0 operating system which mediates access to
other hardware (e.g., disks, network, USB), and one or more domU operating
systems which operate in an unprivileged virtualized environment. IO requests
from the domU systems are forwarded by the Xen hypervisor to the dom0 to be
fulfilled.
   
Xen supports different styles of guest:

[[!table data="""
Style of guest  |Supported by NetBSD
PV              |Yes (dom0, domU)
HVM             |Yes (domU)
PVHVM           |No
PVH             |No
"""]]

In Para-Virtualized (PV) mode, the guest OS does not attempt to access
hardware directly, but instead makes hypercalls to the hypervisor; PV
guests must be specifically coded for Xen. In HVM mode, no guest
modification is required; however, hardware support is required, such
as VT-x on Intel CPUs and SVM on AMD CPUs.
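
To check whether the CPU has the needed support, one can look at the
CPU flags; a minimal check (cpuctl(8) is in the base system, and the
flag names to look for are "VMX" on Intel and "SVM" on AMD):

[[!template id=programlisting text="""
# cpuctl identify 0 | grep -i -e vmx -e svm
"""]]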
   
At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
The dom0 can start one or more domUs.  (Booting is explained in detail
in the dom0 section.)
   
This HOWTO presumes a basic familiarity with the Xen system
architecture, with installing NetBSD on i386/amd64 hardware, and with
installing software from pkgsrc.  See also the [Xen
website](http://www.xenproject.org/).
   
[[!toc]]

#Versions and Support

In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
xenkernel and xentools.  We will refer only to the kernel versions,
but note that both packages must be installed together and must have
matching versions.
   
Versions available in pkgsrc:

[[!table data="""
Xen Version     |Package Name   |Xen CPU Support        |EOL'ed By Upstream
4.2             |xenkernel42    |32bit, 64bit           |Yes
4.5             |xenkernel45    |64bit                  |Yes
4.6             |xenkernel46    |64bit                  |Partially
4.8             |xenkernel48    |64bit                  |No
4.11            |xenkernel411   |64bit                  |No
"""]]

See also the [Xen Security Advisory page](http://xenbits.xen.org/xsa/).

Multiprocessor (SMP) support in NetBSD differs depending on the domain:

[[!table data="""
Domain          |Supports SMP
dom0            |No
domU            |Yes
"""]]

Note: NetBSD support is called XEN3. However, it does support Xen 4,
because the hypercall interface has remained identical.
   
Architecture
------------

Xen itself runs on x86_64 hardware.

The dom0 system, plus each domU, can be either i386PAE or amd64.
i386 without PAE is not supported.

The standard approach is to use NetBSD/amd64 for the dom0.

To use an i386PAE dom0, one must build or obtain a 64bit Xen kernel and
install it on the system.

For domUs, i386PAE is considered as
[faster](https://lists.xen.org/archives/html/xen-devel/2012-07/msg00085.html)
than amd64.
   
#Creating a dom0

In order to install NetBSD as a dom0, one must first install a normal
NetBSD system, and then pivot the install to a dom0 install by changing
the kernel and boot configuration.
   
In 2018-05, trouble booting a dom0 was reported with 256M of RAM: with
512M it worked reliably.  This does not make sense, but if you see
"not ELF" after Xen boots, try increasing dom0 RAM.
   
Installation of NetBSD
----------------------

[Install NetBSD/amd64](/guide/inst/)
just as you would if you were not using Xen.
   
Installation of Xen
-------------------

We will consider that you chose to use Xen 4.8, with NetBSD/amd64 as
dom0. In the dom0, install xenkernel48 and xentools48 from pkgsrc.
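
For example, using binary packages (a sketch, assuming pkgin(1) is
already configured; building from pkgsrc works equally well):

[[!template id=programlisting text="""
# pkgin install xenkernel48 xentools48
"""]]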
   
Once this is done, install the Xen kernel itself:

[[!template id=programlisting text="""
# cp /usr/pkg/xen48-kernel/xen.gz /
"""]]

Then, place a NetBSD XEN3_DOM0 kernel in the `/` directory. Such a kernel
can either be compiled manually, or downloaded from the NetBSD FTP, for
example at:

[[!template id=programlisting text="""
ftp.netbsd.org/pub/NetBSD/NetBSD-8.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
"""]]
   
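For example, to fetch and install that kernel (a sketch, assuming
ftp(1) can reach the URL above):

[[!template id=programlisting text="""
# cd /
# ftp http://ftp.netbsd.org/pub/NetBSD/NetBSD-8.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
"""]]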

Add a line to /boot.cfg to boot Xen:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
"""]]
   
This specifies that the dom0 should have 512MB of RAM, leaving the rest
to be allocated for domUs.  To use a serial console, use:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz;multiboot /xen.gz dom0_mem=512M console=com1 com1=9600,8n1
"""]]
which will use the first serial port for Xen (which counts starting
from 1, unlike NetBSD which counts starting from 0), forcing
speed/parity.  Because the NetBSD command line lacks a
"console=pc" argument, it will use the default "xencons" console device,
which directs the console I/O through Xen to the same console device Xen
itself uses (in this case, the serial port).
   
In an attempt to add performance, one can also add `dom0_max_vcpus=1 dom0_vcpus_pin`,
to force only one vcpu to be provided (since NetBSD dom0 can't use
more) and to pin that vcpu to a physical CPU. Xen has
[many boot options](http://xenbits.xenproject.org/docs/4.8-testing/misc/xen-command-line.html),
and other than dom0 memory and max_vcpus, they are generally not
necessary.
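
A combined line with these options might look like this (illustrative
only; adjust the memory to taste):

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M dom0_max_vcpus=1 dom0_vcpus_pin
"""]]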
   
Copy the boot scripts into `/etc/rc.d`:

[[!template id=programlisting text="""
# cp /usr/pkg/share/examples/rc.d/xen* /etc/rc.d/
"""]]
   
Enable `xencommons`:

[[!template id=filecontent name="/etc/rc.conf" text="""
xencommons=YES
"""]]
   
Now, reboot so that you are running a DOM0 kernel under Xen, rather
than GENERIC without Xen.
   
TODO: Recommend for/against xen-watchdog.
   
Once the reboot is done, use `xl` to inspect Xen's boot messages,
available resources, and running domains.  For example:

[[!template id=programlisting text="""
# xl dmesg
... xen's boot info ...
# xl info
... available memory, etc ...
# xl list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0       64    0  r----     58.1
"""]]
   
Xen logs will be in /var/log/xen.

### Issues with xencommons

`xencommons` starts `xenstored`, which stores data on behalf of dom0 and
domUs.  It does not currently work to stop and start xenstored.
Certainly all domUs should be shut down first, following the sort order
of the rc.d scripts.  However, the dom0 sets up state with xenstored,
and is not notified when xenstored exits, leading to not recreating
the state when the new xenstored starts.  Until there's a mechanism to
make this work, one should not expect to be able to restart xenstored
(and thus xencommons).  There is currently no reason to expect that
this will get fixed any time soon.

anita (for testing NetBSD)
--------------------------

With the setup so far (assuming 4.8/xl), one should be able to run
anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as
root, because anita must create a domU):

[[!template id=programlisting text="""
anita --vmm=xl test file:///usr/obj/i386/
"""]]

Xen-specific NetBSD issues
--------------------------

There are (at least) two additional things different about NetBSD as a
dom0 kernel compared to hardware.

One is that the module ABI is different because some of the #defines
change, so one must build modules for Xen.  As of netbsd-7, the build
system does this automatically.

The other difference is that XEN3_DOM0 does not have exactly the same
options as GENERIC.  While it is debatable whether or not this is a
bug, users should be aware of this and can simply add missing config
items if desired.
   
Updating NetBSD in a dom0
-------------------------
This is just like updating NetBSD on bare hardware, assuming the new
version supports the version of Xen you are running.  Generally, one
replaces the kernel and reboots, and then overlays userland binaries
and adjusts `/etc`.
   
Note that one must update both the non-Xen kernel typically used for
rescue purposes and the DOM0 kernel used with Xen.
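
A minimal sketch of the kernel part of an update, assuming kernels
built into releasedir as elsewhere in this HOWTO and compressed
kernels in `/`:

[[!template id=programlisting text="""
# cp /netbsd-XEN3_DOM0.gz /netbsd-XEN3_DOM0.ok.gz   # keep a fallback
# cp releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz /
# cp releasedir/amd64/binary/kernel/netbsd-GENERIC.gz /netbsd.gz
"""]]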
   
Converting from grub to /boot
-----------------------------

These instructions were used to convert a system from
grub to /boot.  The system was originally installed in February of
2006 with a RAID1 setup and grub to boot Xen 2, and has been updated
over time.  Before these commands, it was running NetBSD 6 i386, Xen
4.1 and grub, much like the message linked earlier in the grub
section.
   
[[!template id=programlisting text="""
# Install MBR bootblocks on both disks.
fdisk -i /dev/rwd0d
fdisk -i /dev/rwd1d
# Install NetBSD primary boot loader (/ is FFSv1) into RAID1 components.
installboot -v /dev/rwd0d /usr/mdec/bootxx_ffsv1
installboot -v /dev/rwd1d /usr/mdec/bootxx_ffsv1
# Install secondary boot loader
cp -p /usr/mdec/boot /
# Create boot.cfg following earlier guidance:
menu=Xen:load /netbsd-XEN3PAE_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
menu=Xen.ok:load /netbsd-XEN3PAE_DOM0.ok.gz console=pc;multiboot /xen.ok.gz dom0_mem=512M
menu=GENERIC:boot
menu=GENERIC single-user:boot -s
menu=GENERIC.ok:boot netbsd.ok
menu=GENERIC.ok single-user:boot netbsd.ok -s
menu=Drop to boot prompt:prompt
default=1
timeout=30
"""]]
   
Upgrading Xen versions
----------------------
   
Minor version upgrades are trivial.  Just rebuild/replace the
xenkernel version and copy the new xen.gz to `/` (where `/boot.cfg`
references it), and reboot.
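
For instance, a minor update within 4.8 might look like this (a
sketch, assuming binary packages):

[[!template id=programlisting text="""
# pkgin install xenkernel48 xentools48
# cp /usr/pkg/xen48-kernel/xen.gz /
# shutdown -r now
"""]]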
   
#Unprivileged domains (domU)
   
This section describes general concepts about domUs.  It does not
address specific domU operating systems or how to install them.  The
config files for domUs are typically in `/usr/pkg/etc/xen`, and are
typically named so that the file name, domU name and the domU's host
name match.
   
The domU is provided with CPU and memory by Xen, configured by the
dom0.  The domU is provided with disk and network by the dom0,
mediated by Xen, and configured in the dom0.
   
Entropy in domUs can be an issue; physical disks and network are on
the dom0.  NetBSD's /dev/random system works, but is often challenged.
   
Config files
------------

See /usr/pkg/share/examples/xen/xlexample* for a small number of
well-commented examples, mostly for running GNU/Linux.

The following is an example minimal domain configuration file. The domU
serves as a network file server.

[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
name = "domU-id"
kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
         'file:/n0/xen/foo-wd1,0x1,w' ]
"""]]

The domain will have the name given in the `name` setting.  The kernel has the
host/domU name in it, so that on the dom0 one can update the various
domUs independently.  The `vif` line causes an interface to be provided,
with a specific MAC address (do not reuse MAC addresses!), in bridge
mode.  Two disks are provided, and they are both writable; the bits
are stored in files and Xen attaches them to a vnd(4) device in the
dom0 on domain creation.  The system treats xbd0 as the boot device
without needing explicit configuration.

By convention, domain config files are kept in `/usr/pkg/etc/xen`.  Note
that "xl create" takes the name of a config file, while other commands
take the name of a domain.

Examples of commands:

[[!template id=programlisting text="""
xl create /usr/pkg/etc/xen/foo
xl console domU-id
xl create -c /usr/pkg/etc/xen/foo
xl shutdown domU-id
xl list
"""]]

Typing `^]` will exit the console session.  Shutting down a domain is
equivalent to pushing the power button; a NetBSD domU will receive a
power-press event and do a clean shutdown.  Shutting down the dom0
will trigger controlled shutdowns of all configured domUs.
   
CPU and memory
--------------
   
A domain is provided with some number of vcpus, less than the number
of CPUs seen by the hypervisor. For a domU, it is controlled
from the config file by the "vcpus = N" directive.

A domain is provided with memory; this is controlled in the config
file by "memory = N" (in megabytes).  In the straightforward case, the
sum of the memory allocated to the dom0 and all domUs must be less
than the available memory.
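
For example, to give a domU two vcpus and 1 GB of memory (the values
are illustrative only):

[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
vcpus = 2
memory = 1024
"""]]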
   
Xen also provides a "balloon" driver, which can be used to let domains
use more memory temporarily.
   
Virtual disks
-------------
   
In domU config files, the disks are defined as a sequence of 3-tuples:
   
 * The first element is "method:/path/to/disk". Common methods are
   "file:" for a file-backed vnd, and "phy:" for something that is already
   a device, such as an LVM logical volume.

 * The second element is an artifact of how virtual disks are passed to
   Linux, and a source of confusion with NetBSD Xen usage.  Linux domUs
   are given a device name to associate with the disk, and values like
   "hda1" or "sda1" are common.  In a NetBSD domU, the first disk appears
   as xbd0, the second as xbd1, and so on.  However, xl demands a
   second argument.  The name given is converted to a major/minor by
   calling stat(2) on the name in /dev and this is passed to the domU.
   In the general case, the dom0 and domU can be different operating
   systems, and it is an unwarranted assumption that they have consistent
   numbering in /dev, or even that the dom0 OS has a /dev.  With NetBSD
   as both dom0 and domU, using values of 0x0 for the first disk and 0x1
   for the second works fine and avoids this issue.  For a GNU/Linux
   guest, one can create /dev/hda1 in /dev, or pass 0x301 for
   /dev/hda1.

 * The third element is "w" for writable disks, and "r" for read-only
   disks.

Example:
[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
disk = [ 'file:/n0/xen/foo-wd0,0x0,w' ]
"""]]
   
Note that NetBSD by default creates only vnd[0123].  If you need more
than 4 total virtual disks at a time, run e.g. "./MAKEDEV vnd4" in the
dom0.

Note that NetBSD by default creates only xbd[0123].  If you need more
virtual disks in a domU, run e.g. "./MAKEDEV xbd4" in the domU.
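
For example (a sketch; substitute whichever device node is missing):

[[!template id=programlisting text="""
# cd /dev && sh MAKEDEV vnd4        # in the dom0
# cd /dev && sh MAKEDEV xbd4        # in the domU
"""]]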
   
Virtual Networking
------------------

Xen provides virtual Ethernets, each of which connects the dom0 and a
domU.  For each virtual network, there is an interface "xvifN.M" in
the dom0, and a matching interface xennetM (NetBSD name) in domU index N.
The interfaces behave as if there is an Ethernet with two
adapters connected.  From this primitive, one can construct various
configurations.  We focus on two common and useful cases for which
there are existing scripts: bridging and NAT.

With bridging (in the example above), the domU perceives itself to be
on the same network as the dom0.  For server virtualization, this is
usually best.  Bridging is accomplished by creating a bridge(4) device
and adding the dom0's physical interface and the various xvifN.0
interfaces to the bridge.  One specifies "bridge=bridge0" in the domU
config file.  The bridge must be set up already in the dom0; an
example /etc/ifconfig.bridge0 is:

[[!template id=filecontent name="/etc/ifconfig.bridge0" text="""
create
up
!brconfig bridge0 add wm0
"""]]
   
With NAT, the domU perceives itself to be behind a NAT running on the
dom0.  This is often appropriate when running Xen on a workstation.
TODO: NAT appears to be configured by "vif = [ '' ]".
   
The MAC address specified is the one used for the interface in the new
domain.  The interface in dom0 will use this address XOR'd with
00:00:00:01:00:00 (so aa:00:00:d1:00:09 in the domU pairs with
aa:00:00:d0:00:09 in the dom0).  Random MAC addresses are assigned if
not given.
   
Starting domains automatically
------------------------------

To start domains `domU-netbsd` and `domU-linux` at boot and shut them
down cleanly on dom0 shutdown, add the following in rc.conf:

[[!template id=filecontent name="/etc/rc.conf" text="""
xendomains="domU-netbsd domU-linux"
"""]]
   
#Creating a domU

Creating domUs is almost entirely independent of operating system.  We
have already presented the basics of config files.  Note that you must
have already completed the dom0 setup so that "xl list" works.
   
Creating a NetBSD domU
----------------------

See the earlier config file, and adjust memory.  Decide on how much
storage you will provide, and prepare it (file or LVM).
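
For a file-backed disk, one can create the backing store by writing
zeros, which preallocates the space; a sketch, using the path from the
earlier example and an arbitrary 4 GB size:

[[!template id=programlisting text="""
# dd if=/dev/zero of=/n0/xen/foo-wd0 bs=1m count=4096
"""]]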
   
While the kernel will be obtained from the dom0 file system, the same
file should be present in the domU as /netbsd so that tools like
savecore(8) can work.  (This is helpful but not necessary.)
   
The kernel must be specifically for Xen and for use as a domU.  The
i386 and amd64 ports provide the following kernels:

        i386 XEN3PAE_DOMU
        amd64 XEN3_DOMU

This will boot NetBSD, but this is not that useful if the disk is
empty.  One approach is to unpack sets onto the disk outside of Xen
(by mounting it, just as you would prepare a physical disk for a
system you can't run the installer on).
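
A sketch of that approach with a file-backed disk, assuming the image
path from the earlier example and installation sets in /tmp/sets;
details such as the disklabel are left to taste:

[[!template id=programlisting text="""
# vnconfig vnd0 /n0/xen/foo-wd0
# disklabel -e -I vnd0              # create an 'a' partition
# newfs /dev/rvnd0a
# mount /dev/vnd0a /mnt
# for set in /tmp/sets/*.tgz; do tar -xzpf "$set" -C /mnt; done
# cd /mnt/dev && sh MAKEDEV all
# cd / && umount /mnt
# vnconfig -u vnd0
"""]]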

A second approach is to run an INSTALL kernel, which has a miniroot
and can load sets from the network.  To do this, copy the INSTALL
kernel to / and change the kernel line in the config file to:

        kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"

Then, start the domain as "xl create -c configfile".

Alternatively, if you want to install NetBSD/Xen with a CDROM image, the following
line should be used in the config file.

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be
selected.  The CDROM device should be changed to `xbd1d`.
   
Once done installing, "halt -p" the new domain (don't reboot or halt,
it would reload the INSTALL_XEN3_DOMU kernel even if you changed the
config file), switch the config file back to the XEN3_DOMU kernel,
and start the new domain again. Now it should be able to use "root on
xbd0a" and you should have a functional NetBSD domU.

TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
`/etc/ttys` except the console. Finally, all screens must be commented
out in `/etc/wscons.conf`.
   
It is also desirable to add

        powerd=YES

in rc.conf. This way, the domain will be properly shut down if
`xm shutdown -R` or `xm shutdown -H` is used on the dom0.
   
It is not strictly necessary to have a kernel (as /netbsd) in the domU
file system.  However, various programs (e.g. netstat) will use that
kernel to look up symbols to read from kernel virtual memory.  If
/netbsd is not the running kernel, those lookups will fail.  (This is
not really a Xen-specific issue, but because the domU kernel is
obtained from the dom0, it is far more likely to be out of sync or
missing with Xen.)
   
Creating a Linux domU
---------------------

Creating unprivileged Linux domains isn't much different from
unprivileged NetBSD domains, but there are some details to know.

First, the second parameter of the disk= declaration (the "0x1" in
the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]
   
does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda).  Linux builds device numbers as: (major \<\< 8 + minor).
So, hda1 which has major 3 and minor 1 on a Linux system will have
device number 0x301.  Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers.  To export a partition to a Linux guest we can use:
   
        disk = [ 'phy:/dev/wd0e,0x300,w' ]
        root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as the
root partition.
   
To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc.  Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, make adjustments in
`/etc` (fstab, network config).  It should also be possible to extract
binary packages such as .rpm or .deb directly to the mounted partition
using the appropriate tool, possibly running under NetBSD's Linux
emulation.  Once the file system has been populated, umount it.  If
desirable, the file system can be converted to ext3 using tune2fs -j.
It should now be possible to boot the Linux guest domain, using one of
the vmlinuz-\*-xenU kernels available in the Xen binary distribution.
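
A minimal sketch of those steps in the dom0, assuming wd0e is the
partition to be exported (mke2fs and tune2fs come from
sysutils/e2fsprogs):

[[!template id=programlisting text="""
# mke2fs /dev/rwd0e
# mount -t ext2fs /dev/wd0e /mnt
# ... populate /mnt from a working Linux system, adjust /mnt/etc ...
# umount /mnt
# tune2fs -j /dev/rwd0e     # optional: convert ext2 to ext3
"""]]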
   
To get the Linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration since not all Linux distributions auto-attach a
tty to the xen console.
   
Creating a Solaris domU
-----------------------

See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/).
   
 Download an Opensolaris [release](http://opensolaris.org/os/downloads/)  
 or [development snapshot](http://genunix.org/) DVD image. Attach the DVD  PCI passthrough: Using PCI devices in guest domains
 image to a MAN.VND.4 device. Copy the kernel and ramdisk filesystem  ---------------------------------------------------
 image to your dom0 filesystem.  
   The dom0 can give other domains access to selected PCI
     dom0# mkdir /root/solaris  devices. This can allow, for example, a non-privileged domain to have
     dom0# vnconfig vnd0 osol-1002-124-x86.iso  access to a physical network interface or disk controller.  However,
     dom0# mount /dev/vnd0a /mnt  keep in mind that giving a domain access to a PCI device most likely
   will give the domain read/write access to the whole physical memory,
     ## for a 64-bit guest  as PCs don't have an IOMMU to restrict memory access to DMA-capable
     dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris  device.  Also, it's not possible to export ISA devices to non-dom0
     dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris  domains, which means that the primary VGA adapter can't be exported.
   A guest domain trying to access the VGA registers will panic.
     ## for a 32-bit guest  
     dom0# cp /mnt/boot/x86.microroot /root/solaris  If the dom0 is NetBSD, it has to be running Xen 3.1, as support has
     dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris  not been ported to later versions at this time.
   
     dom0# umount /mnt  For a PCI device to be exported to a domU, is has to be attached to
             the "pciback" driver in dom0.  Devices passed to the dom0 via the
   pciback.hide boot parameter will attach to "pciback" instead of the
 Keep the MAN.VND.4 configured. For some reason the boot process stalls  usual driver.  The list of devices is specified as "(bus:dev.func)",
 unless the DVD image is attached to the guest as a "phy" device. Create  
 an initial configuration file with the following contents. Substitute  
 */dev/wd0k* with an empty partition at least 8 GB large.  
   
     memory = 640  
     name = 'solaris'  
     disk = [ 'phy:/dev/wd0k,0,w' ]  
     disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]  
     vif = [ 'bridge=bridge0' ]  
     kernel = '/root/solaris/unix'  
     ramdisk = '/root/solaris/x86.microroot'  
     # for a 64-bit guest  
     extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'  
     # for a 32-bit guest  
     #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'  
             
   
 Start the guest.  
   
     dom0# xm create -c solaris.cfg  
     Started domain solaris  
                           v3.3.2 chgset 'unavailable'  
     SunOS Release 5.11 Version snv_124 64-bit  
     Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.  
     Use is subject to license terms.  
     Hostname: opensolaris  
     Remounting root read/write  
     Probing for device nodes ...  
     WARNING: emlxs: ddi_modopen drv/fct failed: err 2  
     Preparing live image for use  
     Done mounting Live image  
             
   
 Make sure the network is configured. Note that it can take a minute for  
 the xnf0 interface to appear.  
   
     opensolaris console login: jack  
     Password: jack  
     Sun Microsystems Inc.   SunOS 5.11      snv_124 November 2008  
     jack@opensolaris:~$ pfexec sh  
     sh-3.2# ifconfig -a  
     sh-3.2# exit  
             
   
 Set a password for VNC and start the VNC server which provides the X11  
 display where the installation program runs.  
   
     jack@opensolaris:~$ vncpasswd  
     Password: solaris  
     Verify: solaris  
     jack@opensolaris:~$ cp .Xclients .vnc/xstartup  
     jack@opensolaris:~$ vncserver :1  
             
   
 From a remote machine connect to the VNC server. Use `ifconfig xnf0` on  
 the guest to find the correct IP address to use.  
   
     remote$ vncviewer 172.18.2.99:1  
             
   
 It is also possible to launch the installation on a remote X11 display.  
   
     jack@opensolaris:~$ export DISPLAY=172.18.1.1:0  
     jack@opensolaris:~$ pfexec gui-install  
              
   
 After the GUI installation is complete you will be asked to reboot.  
 Before that you need to determine the ZFS ID for the new boot filesystem  
 and update the configuration file accordingly. Return to the guest  
 console.  
   
     jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs  
                     bootfs = 43  
     ^C  
     jack@opensolaris:~$  
              
   
The final configuration file should look like this. Note in particular
the last line, where the bootfs value found above (43) appears in the
zfs-bootfs property.
   
     memory = 640  
     name = 'solaris'  
     disk = [ 'phy:/dev/wd0k,0,w' ]  
     vif = [ 'bridge=bridge0' ]  
     kernel = '/root/solaris/unix'  
     ramdisk = '/root/solaris/x86.microroot'  
     extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'  
              
   
 Restart the guest to verify it works correctly.  
   
     dom0# xm destroy solaris  
     dom0# xm create -c solaris.cfg  
     Using config file "./solaris.cfg".  
     v3.3.2 chgset 'unavailable'  
     Started domain solaris  
     SunOS Release 5.11 Version snv_124 64-bit  
     Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.  
     Use is subject to license terms.  
     WARNING: emlxs: ddi_modopen drv/fct failed: err 2  
     Hostname: osol  
     Configuring devices.  
     Loading smf(5) service descriptions: 160/160  
     svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .  
     Reading ZFS config: done.  
     Mounting ZFS filesystems: (6/6)  
     Creating new rsa public/private host key pair  
     Creating new dsa public/private host key pair  
   
     osol console login:  
              
   
 Using PCI devices in guest domains  
 ----------------------------------  
   
The dom0 can give other domains access to selected PCI devices. This
can allow, for example, a non-privileged domain to have access to a
physical network interface or disk controller. However, keep in mind
that giving a domain access to a PCI device most likely will give the
domain read/write access to the whole physical memory, as PCs don't
have an IOMMU to restrict memory access to DMA-capable devices. Also,
it's not possible to export ISA devices to non-dom0 domains, which
means that the primary VGA adapter can't be exported; a guest domain
trying to access the VGA registers will panic.
   
This functionality is only available in NetBSD-5.1 (and later) dom0
and domU. If the dom0 is NetBSD, it has to be running Xen 3.1, as
support has not been ported to later versions at this time.
   
For a PCI device to be exported to a domU, it has to be attached to the
`pciback` driver in the dom0. Devices passed to the dom0 via the
pciback.hide boot parameter will attach to `pciback` instead of the
usual driver. The list of devices is specified as `(bus:dev.func)`,
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)
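
The parameter is given on the dom0 kernel's line in /boot.cfg; a
sketch, with illustrative kernel and Xen paths:

    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc pciback.hide=(00:0a.0)(00:06.0);multiboot /xen.gz dom0_mem=512M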
   
pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.
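
To check from the dom0 (assuming kernfs is mounted on /kern, as is
usual):

    dom0# ls /kern/xen/pci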
   
PCI devices to be exported to a domU are listed in the `pci` array of
the domU's config file, with the format `'0000:bus:dev.func'`:

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an `xpci` device will show up, to which one or more PCI
buses will attach. Then the PCI drivers will attach to PCI buses as
usual. Note that the default NetBSD DOMU kernels do not have `xpci` or
any PCI drivers built in; you have to build your own kernel to use PCI
devices in a domU. Here's a kernel config example; note that only the
`xpci` lines are unusual.

    include         "arch/i386/conf/XEN3_DOMU"

    # Add support for PCI buses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?
   
    # PCI USB controllers
    uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)

    # USB bus support
    usb*    at uhci?

    # USB Hubs
    uhub*   at usb?
    uhub*   at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass*  at uhub? port ? configuration ? interface ?
    wd*     at umass?

    # SCSI controllers
    ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd*     at scsibus? target ? lun ?      # SCSI disk drives
    cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives
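
One way to build such a kernel, assuming the config above was saved as
sys/arch/i386/conf/XEN3_DOMU_PCI in a NetBSD source tree (the config
name is illustrative):

    $ ./build.sh -m i386 tools kernel=XEN3_DOMU_PCI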
   
   
NetBSD as a domU in a VPS
=========================
   
The bulk of the HOWTO is about using NetBSD as a dom0 on your own
hardware.  This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0.  This is not intended to be an exhaustive list of VPS providers;
only a few that specifically support NetBSD are mentioned.
   
VPS operators provide varying degrees of access and mechanisms for
configuration.  The big issue is usually how one controls which kernel
is booted, because the kernel is nominally in the dom0 file system (to
which VPS users do not normally have access).  A VPS user may want to
compile a custom kernel for security updates, to run npf or IPsec, or
for any other reason one might want to change a kernel.  A second
issue is how to install NetBSD.
   
   One approach is to have an administrative interface to upload a kernel,
   or to select from a prepopulated list.  Other approaches are pygrub
   (deprecated) and pvgrub, which are ways to have a bootloader obtain a
   kernel from the domU file system.  This is closer to a regular physical
   computer, where someone who controls a machine can replace the kernel.
   
Another issue is multiple CPUs.  With NetBSD 6, domUs support
multiple vcpus, and it is typical for VPS providers to enable multiple
CPUs for NetBSD domUs.
   
pygrub
------
   
   pygrub runs in the dom0 and looks into the domU file system.  This
   implies that the domU must have a kernel in a file system in a format
   known to pygrub.  As of 2014, pygrub seems to be of mostly historical
   interest.
   
   pvgrub
   ------
   
   pvgrub is a version of grub that uses PV operations instead of BIOS
   calls.  It is booted from the dom0 as the domU kernel, and then reads
   /grub/menu.lst and loads a kernel from the domU file system.
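
A minimal /grub/menu.lst sketch for a NetBSD domU follows; the
partition syntax and root device name are illustrative, so check your
provider's documentation for the exact form:

    default 0
    timeout 5

    title NetBSD
        root (hd0,0)
        kernel /netbsd root=xbd0a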
   
[Panix](http://www.panix.com/) lets users use pvgrub.  Panix reports
that pvgrub works with FFSv2 with 16K/2K and 32K/4K block/frag sizes
(and hence with defaults from "newfs -O 2").  See [Panix's pvgrub
page](http://www.panix.com/v-colo/grub.html), which describes only
Linux but should be updated to cover NetBSD :-).
   
[prgmr.com](http://prgmr.com/) also lets users use pvgrub to boot
their own kernel.  See the [prgmr.com NetBSD
HOWTO](http://wiki.prgmr.com/mediawiki/index.php/NetBSD_as_a_DomU)
(which is in need of updating).
   
   It appears that [grub's FFS
   code](http://xenbits.xensource.com/hg/xen-unstable.hg/file/bca284f67702/tools/libfsimage/ufs/fsys_ufs.c)
   does not support all aspects of modern FFS, but there are also reports
   that FFSv2 works fine.  At prgmr, typically one has an ext2 or FAT
   partition for the kernel with the intent that grub can understand it,
   which leads to /netbsd not being the actual kernel.  One must remember
   to update the special boot partition.
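
As a sketch, updating the kernel might look like the following,
assuming the boot partition is ext2 on xbd0e (the device name is
illustrative):

    domU# mount_ext2fs /dev/xbd0e /mnt
    domU# cp netbsd-new /mnt/netbsd
    domU# umount /mnt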
   
   Amazon
   ------
   
See the [Amazon EC2 page](/amazon_ec2/).
