Diff for /wikisrc/ports/xen/howto.mdwn between versions 1.48 and 1.75

version 1.48, 2014/12/26 20:00:44 to version 1.75, 2015/01/17 01:32:12
Introduction
============

[![[Xen
screenshot]](http://www.netbsd.org/gallery/in-Action/hubertf-xens.png)](http://www.netbsd.org/gallery/in-Action/hubertf-xen.png)

Xen is a hypervisor (or virtual machine monitor) for x86 hardware
(i686-class or higher), which supports running multiple guest
operating systems on a single physical machine.  Xen is a Type 1 or
bare-metal hypervisor; one uses the Xen kernel to control the CPU,
memory and console, a dom0 operating system which mediates access to
other hardware (e.g., disks, network, USB), and one or more domU
operating systems which operate in an unprivileged virtualized
environment.  IO requests from the domU systems are forwarded by the
hypervisor (Xen) to the dom0 to be fulfilled.
   
Xen supports two styles of guests.  The original is Para-Virtualized
(PV) which means that the guest OS does not attempt to access hardware
[...]
specific PCI devices can be made available [...]
of the dom0.  This can be useful to let a domU run X11, or access some
network interface or other peripheral.

NetBSD used to support Xen2; this has been removed.

Prerequisites
-------------
   
[...]
architecture.  This HOWTO presumes familiarity with installing NetBSD
on i386/amd64 hardware and installing software from pkgsrc.
See also the [Xen website](http://www.xenproject.org/).
   
   
Versions of Xen and NetBSD
==========================
   
[...]

Build problems
--------------

Ideally, all versions of Xen in pkgsrc would build on all versions of
NetBSD on both i386 and amd64.  However, that isn't the case.  Besides
aging code and aging compilers, qemu (included in xentools for HVM
support) is difficult to build.  The following are known to work or FAIL:

        xenkernel3 netbsd-5 amd64
        xentools3 netbsd-5 amd64
        xentools3-hvm netbsd-5 amd64 ????
        xenkernel33 netbsd-5 amd64
        xentools33 netbsd-5 amd64
        xenkernel41 netbsd-5 amd64
        xentools41 netbsd-5 amd64
        xenkernel42 netbsd-5 amd64
        xentools42 netbsd-5 amd64

        xenkernel3 netbsd-6 i386 FAIL
        xentools3 netbsd-6 i386
        xentools3-hvm netbsd-6 i386 FAIL (dependencies fail)
        xenkernel33 netbsd-6 i386
        xentools33 netbsd-6 i386
        xenkernel41 netbsd-6 i386
        xentools41 netbsd-6 i386
        xenkernel42 netbsd-6 i386
        xentools42 netbsd-6 i386 *MIXED

        (all 3 and 33 seem to FAIL)
        xenkernel41 netbsd-7 i386
        xentools41 netbsd-7 i386
        xenkernel42 netbsd-7 i386
        xentools42 netbsd-7 i386 ??FAIL

(*On netbsd-6 i386, there is a xentools42 in the 2014Q3 official builds,
but it does not build for gdt.)
   
NetBSD as a dom0
================

[...]
alternately with little problems, simply by not starting the
Xen daemons when not running Xen.

Note that NetBSD as dom0 does not support multiple CPUs.  This will
limit the performance of the Xen/dom0 workstation approach.  In theory
the only issue is that the "backend drivers" are not yet MPSAFE:
  http://mail-index.netbsd.org/netbsd-users/2014/08/29/msg015195.html
   
Installation of NetBSD
----------------------

[...]
For debugging, one may copy xen-debug.gz [...]
to DIAGNOSTIC and DEBUG in NetBSD.  xen-debug.gz is basically only
useful with a serial console.  Then, place a NetBSD XEN3_DOM0 kernel
in /, copied from releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
of a NetBSD build.  If using i386, use
releasedir/i386/binary/kernel/netbsd-XEN3PAE_DOM0.gz.  (If using Xen
3.1 and i386, you may use XEN3_DOM0 with the non-PAE Xen.  But you
should not use Xen 3.1.)  Both xen and the NetBSD kernel may be (and
typically are) left compressed.

In a dom0 kernel, kernfs is mandatory for xend to communicate with the
kernel, so ensure that /kern is in fstab.  TODO: Say this is default,
or file a PR and give a reference.
   
Because you already installed NetBSD, you have a working boot setup
with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
[...]
See boot.cfg(5) for an example.  The basic [...]

        menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M

which specifies that the dom0 should have 256M, leaving the rest to be
allocated for domUs.  In an attempt to add performance, one can also
add

        dom0_max_vcpus=1 dom0_vcpus_pin

[...]
As with non-Xen systems, you should have [...]
kernel that works without Xen) and fallback versions of the non-Xen
kernel, Xen, and the dom0 kernel.
   
   Using grub (historic)
   ---------------------
   
   Before NetBSD's native bootloader could support Xen, the use of
   grub was recommended.  If necessary, see the
   [old grub information](/ports/xen/howto-grub/).
   
The [HowTo on Installing into
RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
explains how to set up booting a dom0 with Xen using grub with
[...]
boot.)

Configuring Xen
---------------
   
   Xen logs will be in /var/log/xen.
   
Now, you have a system that will boot Xen and the dom0 kernel, and
just run the dom0 kernel.  There will be no domUs, and none can be
started because you still have to configure the dom0 tools.  The
[...]
installed 4.1 or 4.2):

For 4.1 (and thus xm; xl is believed not to work well), add to rc.conf:

        xencommons=YES
        xend=YES

(If you are using xentools41 from before 2014-12-26, change
rc.d/xendomains to use xm rather than xl.)

For 4.2 with xm, add to rc.conf:

        xencommons=YES
        xend=YES

For 4.2 with xl (preferred), add to rc.conf:

        xencommons=YES
        TODO: explain if there is a xend replacement
   
TODO: Recommend for/against xen-watchdog.
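
If you prefer to start the daemons by hand rather than rebooting, the
order matters.  Assuming the rc.d scripts shipped with the xentools
package have been copied into /etc/rc.d (an assumption; adjust for
your setup), something like the following should work:

        # /etc/rc.d/xencommons start
        # /etc/rc.d/xend start
        # /etc/rc.d/xendomains start

(xendomains only matters once you have domUs configured to start
automatically.)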
   
After you have configured the daemons and either started them (in the
order given) or rebooted, run the following (or use xl) to inspect
Xen's boot messages, available resources, and running domains:
   
        # xm dmesg
        [xen's boot info]
[...]
and adjusts /etc.

Note that one must update both the non-Xen kernel typically used for
rescue purposes and the DOM0 kernel used with Xen.
   
Converting from grub to /boot
-----------------------------

These instructions were [TODO: will be] used to convert a system from
grub to /boot.  The system was originally installed in February of
2006 with a RAID1 setup and grub to boot Xen 2, and has been updated
over time.  Before these commands, it was running NetBSD 6 i386, Xen
4.1 and grub, much like the message linked earlier in the grub
section.
   
           # Install mbr bootblocks on both disks. 
           fdisk -i /dev/rwd0d
           fdisk -i /dev/rwd1d
           # Install NetBSD primary boot loader (/ is FFSv1) into RAID1 components.
           installboot -v /dev/rwd0d /usr/mdec/bootxx_ffsv1
           installboot -v /dev/rwd1d /usr/mdec/bootxx_ffsv1
           # Install secondary boot loader
           cp -p /usr/mdec/boot /
          # Create boot.cfg following earlier guidance:
           menu=Xen:load /netbsd-XEN3PAE_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
           menu=Xen.ok:load /netbsd-XEN3PAE_DOM0.ok.gz console=pc;multiboot /xen.ok.gz dom0_mem=256M
           menu=GENERIC:boot
           menu=GENERIC single-user:boot -s
           menu=GENERIC.ok:boot netbsd.ok
           menu=GENERIC.ok single-user:boot netbsd.ok -s
           menu=Drop to boot prompt:prompt
           default=1
           timeout=30
   
   TODO: actually do this and fix it if necessary.
   
Updating Xen versions
---------------------

[...]

Unprivileged domains (domU)
===========================

This section describes general concepts about domUs.  It does not
address specific domU operating systems or how to install them.  The
config files for domUs are typically in /usr/pkg/etc/xen, and are
typically named so that the file name, domU name and the domU's host
name match.

The domU is provided with cpu and memory by Xen, configured by the
[...]
anyplace, reasonable places to store domU kernels are in /
(so they are near the dom0 kernel), in /usr/pkg/etc/xen (near the
config files), or in /u0/xen (where the vdisks are).
   
   Note that loading the domU kernel from the dom0 implies that boot
   blocks, /boot, /boot.cfg, and so on are all ignored in the domU.
See the VPS section near the end for discussion of alternate ways to
obtain domU kernels.
   
[...]
are given a device name to associate with [...]
"hda1" or "sda1" are common.  In a NetBSD domU, the first disk appears
as xbd0, the second as xbd1, and so on.  However, xm/xl demand a
second argument.  The name given is converted to a major/minor by
calling stat(2) on the name in /dev and this is passed to the domU.
In the general case, the dom0 and domU can be different operating
systems, and it is an unwarranted assumption that they have consistent
numbering in /dev, or even that the dom0 OS has a /dev.  With NetBSD
as both dom0 and domU, using values of 0x0 for the first disk and 0x1
for the second works fine and avoids this issue.  For a GNU/Linux
guest, one can create /dev/hda1 in /dev, or pass 0x301 for /dev/hda1.

The third element is "w" for writable disks, and "r" for read-only
disks.
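
For example, a domU with a writable first disk backed by a dom0
partition and a read-only second disk could use (device names are
illustrative):

        disk = [ 'phy:/dev/wd0e,0x0,w', 'phy:/dev/cd0a,0x1,r' ]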

[...]
With NAT, the domU perceives itself to be [...]
dom0.  This is often appropriate when running Xen on a workstation.
TODO: NAT appears to be configured by "vif = [ '' ]".
   
   The MAC address specified is the one used for the interface in the new
   domain.  The interface in dom0 will use this address XOR'd with
   00:00:00:01:00:00.  Random MAC addresses are assigned if not given.
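
For the bridged case, a vif entry looks like the following (MAC and
bridge name are illustrative):

        vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]

With this MAC, the dom0 end of the interface will use
aa:00:00:51:02:f0 (the XOR'd value), and bridge0 must already exist
in the dom0.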
   
Sizing domains
--------------
   
[...]

Creating specific unprivileged domains (domU)
=============================================

Creating domUs is almost entirely independent of operating system.  We
have already presented the basics of config files.  Note that you must
have already completed the dom0 setup so that "xl list" (or "xm list")
works.
   
Creating an unprivileged NetBSD domain (domU)
---------------------------------------------
   
See the earlier config file, and adjust memory.  Decide on how much
storage you will provide, and prepare it (file or lvm).
 `/usr/pkg/etc/xen/`. On creation, a kernel has to be specified, which  
 will be executed in the new domain (this kernel is in the *domain0* file  
 system, not on the new domain virtual disk; but please note, you should  
 install the same kernel into *domainU* as `/netbsd` in order to make  
 your system tools, like savecore(8), work). A suitable kernel is  
 provided as part of the i386 and amd64 binary sets: XEN3\_DOMU.  
   
 Here is an /usr/pkg/etc/xen/nbsd example config file:  
   
     #  -*- mode: python; -*-  
     #============================================================================  
     # Python defaults setup for 'xm create'.  
     # Edit this file to reflect the configuration of your system.  
     #============================================================================  
   
     #----------------------------------------------------------------------------  
     # Kernel image file. This kernel will be loaded in the new domain.  
     kernel = "/home/bouyer/netbsd-XEN3_DOMU"  
     #kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"  
   
     # Memory allocation (in megabytes) for the new domain.  
     memory = 128  
   
     # A handy name for your new domain. This will appear in 'xm list',  
     # and you can use this as parameters for xm in place of the domain  
     # number. All domains must have different names.  
     #  
     name = "nbsd"  
   
     # The number of virtual CPUs this domain has.  
     #  
     vcpus = 1  
   
     #----------------------------------------------------------------------------  
     # Define network interfaces for the new domain.  
   
     # Number of network interfaces (must be at least 1). Default is 1.  
     nics = 1  
   
     # Define MAC and/or bridge for the network interfaces.  
     #  
     # The MAC address specified in ``mac'' is the one used for the interface  
     # in the new domain. The interface in domain0 will use this address XOR'd  
     # with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random  
     # MACs are assigned if not given.  
     #  
     # ``bridge'' is a required parameter, which will be passed to the  
     # vif-script called by xend(8) when a new domain is created to configure  
     # the new xvif interface in domain0.  
     #  
     # In this example, the xvif is added to bridge0, which should have been  
     # set up prior to the new domain being created -- either in the  
     # ``network'' script or using a /etc/ifconfig.bridge0 file.  
     #  
     vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]  
   
     #----------------------------------------------------------------------------  
     # Define the disk devices you want the domain to have access to, and  
     # what you want them accessible as.  
     #  
     # Each disk entry is of the form:  
     #  
     #   phy:DEV,VDEV,MODE  
     #  
     # where DEV is the device, VDEV is the device name the domain will see,  
     # and MODE is r for read-only, w for read-write.  You can also create  
     # file-backed domains using disk entries of the form:  
     #  
     #   file:PATH,VDEV,MODE  
     #  
     # where PATH is the path to the file used as the virtual disk, and VDEV  
     # and MODE have the same meaning as for ``phy'' devices.  
     #  
     # VDEV doesn't really matter for a NetBSD guest OS (it's just used as an index),  
     # but it does for Linux.  
     # Worse, the device has to exist in /dev/ of domain0, because xm will  
     # try to stat() it. This means that in order to load a Linux guest OS  
     # from a NetBSD domain0, you'll have to create /dev/hda1, /dev/hda2, ...  
     # on domain0, with the major/minor from Linux :(  
     # Alternatively it's possible to specify the device number in hex,  
     # e.g. 0x301 for /dev/hda1, 0x302 for /dev/hda2, etc ...  
   
     disk = [ 'phy:/dev/wd0e,0x1,w' ]  While the kernel will be obtained from the dom0 filesystem, the same
     #disk = [ 'file:/var/xen/nbsd-disk,0x01,w' ]  file should be present in the domU as /netbsd so that tools like
     #disk = [ 'file:/var/xen/nbsd-disk,0x301,w' ]  savecore(8) can work.   (This is helpful but not necessary.)
   
The kernel must be specifically for Xen and for use as a domU.  The
i386 and amd64 binary sets provide the following kernels:
   
           i386 XEN3_DOMU
           i386 XEN3PAE_DOMU
           amd64 XEN3_DOMU
   
Unless using Xen 3.1 (and you shouldn't) with i386-mode Xen, you must
use the PAE version of the i386 kernel.

This will boot NetBSD, but this is not that useful if the disk is
empty.  One approach is to unpack sets onto the disk outside of xen
(by mounting it, just as you would prepare a physical disk for a
system you can't run the installer on).
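
A sketch of that approach for a file-backed disk follows; the image
path, size, and set list are illustrative, and the commands assume a
NetBSD dom0:

        # Create and attach a 4 GB disk image
        dd if=/dev/zero of=/u0/xen/domU-example.img bs=1m count=4096
        vnconfig vnd0 /u0/xen/domU-example.img
        disklabel -e -I vnd0              # create an 'a' partition covering the image
        newfs /dev/rvnd0a
        mount /dev/vnd0a /mnt

        # Unpack sets and create device nodes
        for set in base comp etc man misc text; do
                tar xzpf /path/to/sets/$set.tgz -C /mnt
        done
        (cd /mnt/dev && sh MAKEDEV all)

        # Minimal configuration, then detach the image
        echo '/dev/xbd0a / ffs rw 1 1' > /mnt/etc/fstab
        echo 'rc_configured=YES' >> /mnt/etc/rc.conf
        umount /mnt
        vnconfig -u vnd0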

A second approach is to run an INSTALL kernel, which has a miniroot
and can load sets from the network.  To do this, copy the INSTALL
kernel to / and change the kernel line in the config file to:
   
           kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
   
   Then, start the domain as "xl create -c configname".
   
   Alternatively, if you want to install NetBSD/Xen with a CDROM image, the following
   line should be used in the config file.
   
    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]
   
After booting the domain, the option to install via CDROM may be
selected.  The CDROM device should be changed to `xbd1d`.

Once done installing, "halt -p" the new domain (don't reboot or halt,
it would reload the INSTALL_XEN3_DOMU kernel even if you changed the
config file), switch the config file back to the XEN3_DOMU kernel,
and start the new domain again.  Now it should be able to use "root on
xbd0a" and you should have a functional NetBSD domU.
   
   TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals.  These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`.  You must disable all terminals in
[...]
Finally, all screens must be commented out [...]
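
The usual change (a sketch; compare with your domU's stock files) is
to turn the console entry on and the ttyE* entries off in /etc/ttys,
and to comment out all the screen lines in /etc/wscons.conf:

        console "/usr/libexec/getty Pc"         vt100   on secure
        ttyE0   "/usr/libexec/getty Pc"         vt220   off secure
        ttyE1   "/usr/libexec/getty Pc"         vt220   off secure
        ttyE2   "/usr/libexec/getty Pc"         vt220   off secure
        ttyE3   "/usr/libexec/getty Pc"         vt220   off secure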
   
It is also desirable to add
   
        powerd=YES
   
in rc.conf. This way, the domain will be properly shut down if
`xm shutdown -R` or `xm shutdown -H` is used on the dom0.
   
Your domain should now be ready to work, enjoy.
   
Creating an unprivileged Linux domain (domU)
--------------------------------------------

[...] the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]
   
does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda).  Linux builds device numbers as: (major \<\< 8) + minor.
So, hda1 which has major 3 and minor 1 on a Linux system will have
device number 0x301.  Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers.  To export a partition to a Linux guest we can use:

        disk = [ 'phy:/dev/wd0e,0x300,w' ]
        root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as root
partition.
   
To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc.  Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, make adjustments in
`/etc` (fstab, network config).  It should also be possible to extract
binary packages such as .rpm or .deb directly to the mounted partition
using the appropriate tool, possibly running under NetBSD's Linux
emulation.  Once the filesystem has been populated, umount it.  If
desirable, the filesystem can be converted to ext3 using tune2fs -j.
It should now be possible to boot the Linux guest domain, using one of
the vmlinuz-\*-xenU kernels available in the Xen binary distribution.
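
A sketch of that procedure, assuming /dev/wd0e is the partition being
exported to the guest (as in the example above) and that the Linux
files are available somewhere in the dom0:

        mke2fs /dev/rwd0e                 # mke2fs comes from sysutils/e2fsprogs
        mount -t ext2fs /dev/wd0e /mnt
        (cd /path/to/linux-root && tar cpf - .) | (cd /mnt && tar xpf -)
        # edit /mnt/etc/fstab and the network configuration, then:
        umount /mnt
        tune2fs -j /dev/rwd0e             # optional: convert ext2 to ext3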
   
To get the Linux console right, you need to add:
   
[...]
tty to the xen console.

Creating an unprivileged Solaris domain (domU)
----------------------------------------------

See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/).

PCI passthrough: Using PCI devices in guest domains
---------------------------------------------------

The dom0 can give other domains access to selected PCI
devices.  This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller.  However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices.  Also, it's not possible to export ISA devices to non-dom0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.

If the dom0 is NetBSD, it has to be running Xen 3.1, as support has
not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to
the "pciback" driver in dom0.  Devices passed to the dom0 via the
pciback.hide boot parameter will attach to "pciback" instead of the
usual driver.  The list of devices is specified as "(bus:dev.func)",
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

        pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.

PCI devices to be exported to a domU are listed in the "pci" array of
the domU's config file, with the format "0000:bus:dev.func".

        pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an "xpci" device will show up, to which one or more pci
busses will attach.  Then the PCI drivers will attach to PCI busses as
usual.  Note that the default NetBSD DOMU kernels do not have "xpci"
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU.  Here's a kernel config example;
note that only the "xpci" lines are unusual.

        include         "arch/i386/conf/XEN3_DOMU"

        # Add support for PCI busses to the XEN3_DOMU kernel
        xpci* at xenbus ?
        pci* at xpci ?

        # PCI USB controllers
        uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)

        # USB bus support
        usb*    at uhci?

        # USB Hubs
        uhub*   at usb?
        uhub*   at uhub? port ? configuration ? interface ?

        # USB Mass Storage
        umass*  at uhub? port ? configuration ? interface ?
        wd*     at umass?
        # SCSI controllers
        ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI

        # SCSI bus support (for both ahc and umass)
        scsibus* at scsi?

        # SCSI devices
        sd*     at scsibus? target ? lun ?      # SCSI disk drives
        cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives
   
   
NetBSD as a domU in a VPS
=========================

The bulk of the HOWTO is about using NetBSD as a dom0 on your own
hardware.  This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0.  This is not intended to be an exhaustive list of VPS providers;
only a few are mentioned that specifically support NetBSD.
   
   VPS operators provide varying degrees of access and mechanisms for
   configuration.  The big issue is usually how one controls which kernel
   is booted, because the kernel is nominally in the dom0 filesystem (to
which VPS users do not normally have access).  A second issue is how
   to install NetBSD.
A VPS user may want to compile a kernel for security updates, to run
npf or IPsec, or for any other reason one might want to change
kernels.
   
One approach is to have an administrative interface to upload a kernel,
   or to select from a prepopulated list.  Other approaches are pygrub
   (deprecated) and pvgrub, which are ways to have a bootloader obtain a
   kernel from the domU filesystem.  This is closer to a regular physical
   computer, where someone who controls a machine can replace the kernel.
   
A separate issue is multiple CPUs.  With NetBSD 6, domUs support
   multiple vcpus, and it is typical for VPS providers to enable multiple
   CPUs for NetBSD domUs.
   
   pygrub
   -------
   
pygrub runs in the dom0 and looks into the domU filesystem.  This
implies that the domU must have a kernel in a filesystem in a format
known to pygrub.  As of 2014, pygrub seems to be of mostly historical
interest.
   
pvgrub
------
   
   pvgrub is a version of grub that uses PV operations instead of BIOS
   calls.  It is booted from the dom0 as the domU kernel, and then reads
   /grub/menu.lst and loads a kernel from the domU filesystem.
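
For illustration only, a /grub/menu.lst for a NetBSD domU might look
roughly like this; the partition syntax and kernel arguments vary with
the provider's pvgrub build and your disk layout, so check their
documentation:

        default=0
        timeout=5

        title NetBSD
        root (hd0,0,a)
        kernel /netbsd root=xbd0a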
   
   [Panix](http://www.panix.com/) lets users use pvgrub.  Panix reports
that pvgrub works with FFSv2 with 16K/2K and 32K/4K block/frag sizes
   (and hence with defaults from "newfs -O 2").  See [Panix's pvgrub
   page](http://www.panix.com/v-colo/grub.html), which describes only
   Linux but should be updated to cover NetBSD :-).
   
[prgmr.com](http://prgmr.com/) also lets users use pvgrub to boot
their own kernel.  See the [prgmr.com NetBSD
   HOWTO](http://wiki.prgmr.com/mediawiki/index.php/NetBSD_as_a_DomU)
   (which is in need of updating).
   
   It appears that [grub's FFS
   code](http://xenbits.xensource.com/hg/xen-unstable.hg/file/bca284f67702/tools/libfsimage/ufs/fsys_ufs.c)
   does not support all aspects of modern FFS, but there are also reports
   that FFSv2 works fine.  At prgmr, typically one has an ext2 or FAT
   partition for the kernel with the intent that grub can understand it,
   which leads to /netbsd not being the actual kernel.  One must remember
to update the special boot partition.
   
   Amazon
   ------
   
   TODO: add link to NetBSD amazon howto.
   
Using npf
---------

In standard kernels, npf is a module, and thus cannot be loaded in a
DOMU kernel.

TODO: explain how to compile npf into a custom kernel, answering (but
note that the problem was caused by not booting the right kernel):
http://mail-index.netbsd.org/netbsd-users/2014/12/26/msg015576.html
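
A sketch of such a custom kernel config; the key addition is the npf
pseudo-device (this is an assumption, so check the GENERIC/ALL configs
of your NetBSD version for the exact line):

        include "arch/amd64/conf/XEN3_DOMU"

        # Build the NPF packet filter into the kernel instead of
        # relying on the loadable npf module.
        pseudo-device   npf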
   
   TODO items for improving NetBSD/xen
   ===================================
   
   * Package Xen 4.4.
   * Get PCI passthrough working on Xen 4.2 (or 4.4).
   * Get pvgrub into pkgsrc, either via xentools or separately.
   * grub
     * Check/add support to pkgsrc grub2 for UFS2 and arbitrary
       fragsize/blocksize (UFS2 support may be present; the point is to
       make it so that with any UFS1/UFS2 filesystem setup that works
       with NetBSD grub will also work).
       See [pkg/40258](http://gnats.netbsd.org/40258).
     * Push patches upstream.
     * Get UFS2 patches into pvgrub.
   * Add support for PV ops to a version of /boot, and make it usable as
     a kernel in Xen, similar to pvgrub.
