--- wikisrc/ports/xen/howto.mdwn	2014/12/26 20:00:44	1.48
+++ wikisrc/ports/xen/howto.mdwn	2014/12/26 23:58:18	1.53
@@ -204,7 +204,9 @@ alternately with little problems, simply
Xen daemons when not running Xen.

Note that NetBSD as dom0 does not support multiple CPUs. This will
-limit the performance of the Xen/dom0 workstation approach.
+limit the performance of the Xen/dom0 workstation approach. In theory
+the only issue is that the "backend drivers" are not yet MPSAFE:
+  http://mail-index.netbsd.org/netbsd-users/2014/08/29/msg015195.html

Installation of NetBSD
----------------------
@@ -263,8 +265,20 @@ in /, copied from releasedir/amd64/binar
of a NetBSD build. Both xen and NetBSD may be left compressed. (If
using i386, use releasedir/i386/binary/kernel/netbsd-XEN3PAE_DOM0.gz.)

-In a dom0 kernel, kernfs is mandatory for xend to comunicate with the
-kernel, so ensure that /kern is in fstab.
+With Xen as the kernel, you must provide a dom0 NetBSD kernel to be
+used as a module; place this in /. Suitable kernels are provided in
+releasedir/binary/kernel:
+
+    i386 XEN3_DOM0
+    i386 XEN3PAE_DOM0
+    amd64 XEN3_DOM0
+
+The first one is only for use with Xen 3.1 and i386-mode Xen (and you
+should not do this). Current Xen always uses PAE on i386, but you
+should generally use amd64 for the dom0. In a dom0 kernel, kernfs is
+mandatory for xend to communicate with the kernel, so ensure that /kern
+is in fstab. TODO: Say this is default, or file a PR and give a
+reference.

Because you already installed NetBSD, you have a working boot setup
with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
@@ -297,6 +311,8 @@ boot.)
Configuring Xen
---------------

+Xen logs will be in /var/log/xen.
+
Now, you have a system that will boot Xen and the dom0 kernel, and
just run the dom0 kernel. There will be no domUs, and none can be
started because you still have to configure the dom0 tools. The
@@ -320,27 +336,27 @@ installed 4.1 or 4.2):
For 4.1 (and thus xm; xl is believed not to work well), add to rc.conf:

-    xend=YES
    xencommons=YES
+    xend=YES

TODO: Explain why if xm is preferred on 4.1, rc.d/xendomains has xl.
Or fix the package.

For 4.2 with xm, add to rc.conf

-    xend=YES
    xencommons=YES
+    xend=YES

For 4.2 with xl (preferred), add to rc.conf:

-    TODO: explain if there is a xend replacement
    xencommons=YES
+    TODO: explain if there is a xend replacement

TODO: Recommend for/against xen-watchdog.

-After you have configured the daemons and either started them or
-rebooted, run the following (or use xl) to inspect Xen's boot
-messages, available resources, and running domains:
+After you have configured the daemons and either started them (in the
+order given) or rebooted, run the following (or use xl) to inspect
+Xen's boot messages, available resources, and running domains:

    # xm dmesg
    [xen's boot info]
@@ -541,12 +557,14 @@ are given a device name to associate wit
"hda1" or "sda1" are common. In a NetBSD domU, the first disk appears
as xbd0, the second as xbd1, and so on. However, xm/xl demand a
second argument. The name given is converted to a major/minor by
-consulting /dev and this is passed to the domU (TODO: check this). In
-the general case, the dom0 and domU can be different operating
+calling stat(2) on the name in /dev and this is passed to the domU.
+In the general case, the dom0 and domU can be different operating
systems, and it is an unwarranted assumption that they have consistent
numbering in /dev, or even that the dom0 OS has a /dev.
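+
+As an illustration (a sketch only; the backing partition and values
+are hypothetical), the same dom0 partition could be exported either
+under a name, which must then exist in the dom0's /dev so it can be
+stat()ed, or as a bare hex major/minor number that is passed to the
+domU unchanged:
+
+    disk = [ 'phy:/dev/wd0e,hda1,w' ]
+    disk = [ 'phy:/dev/wd0e,0x1,w' ]
+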
With NetBSD as both dom0 and domU, using values of 0x0 for the first disk and 0x1
-for the second works fine and avoids this issue.
+for the second works fine and avoids this issue. For a GNU/Linux
+guest, one can create /dev/hda1 in /dev, or pass 0x301 for
+/dev/hda1.

The third element is "w" for writable disks, and "r" for read-only
disks.

@@ -578,6 +596,10 @@ With NAT, the domU perceives itself to b
dom0. This is often appropriate when running Xen on a workstation.
TODO: NAT appears to be configured by "vif = [ '' ]".

+The MAC address specified is the one used for the interface in the new
+domain. The interface in dom0 will use this address XOR'd with
+00:00:00:01:00:00. Random MAC addresses are assigned if not given.
+

Sizing domains
--------------

@@ -605,238 +627,58 @@ Creating specific unprivileged domains (
=============================================

Creating domUs is almost entirely independent of operating system. We
-first explain NetBSD, and then differences for Linux and Solaris.
-Note that you must have already completed the dom0 setup so that "xm
-list" (or "xl list") works.
+have already presented the basics of config files. Note that you must
+have already completed the dom0 setup so that "xl list" (or "xm list")
+works.

Creating an unprivileged NetBSD domain (domU)
---------------------------------------------

-'xm create' allows you to create a new domain. It uses a config file in
-PKG\_SYSCONFDIR for its parameters. By default, this file will be in
-`/usr/pkg/etc/xen/`. On creation, a kernel has to be specified, which
-will be executed in the new domain (this kernel is in the *domain0* file
-system, not on the new domain virtual disk; but please note, you should
-install the same kernel into *domainU* as `/netbsd` in order to make
-your system tools, like savecore(8), work). A suitable kernel is
-provided as part of the i386 and amd64 binary sets: XEN3\_DOMU.
-
-Here is an /usr/pkg/etc/xen/nbsd example config file:
-
-    # -*- mode: python; -*-
-    #============================================================================
-    # Python defaults setup for 'xm create'.
-    # Edit this file to reflect the configuration of your system.
-    #============================================================================
-
-    #----------------------------------------------------------------------------
-    # Kernel image file. This kernel will be loaded in the new domain.
-    kernel = "/home/bouyer/netbsd-XEN3_DOMU"
-    #kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
-
-    # Memory allocation (in megabytes) for the new domain.
-    memory = 128
-
-    # A handy name for your new domain. This will appear in 'xm list',
-    # and you can use this as parameters for xm in place of the domain
-    # number. All domains must have different names.
-    #
-    name = "nbsd"
-
-    # The number of virtual CPUs this domain has.
-    #
-    vcpus = 1
-
-    #----------------------------------------------------------------------------
-    # Define network interfaces for the new domain.
-
-    # Number of network interfaces (must be at least 1). Default is 1.
-    nics = 1
-
-    # Define MAC and/or bridge for the network interfaces.
-    #
-    # The MAC address specified in ``mac'' is the one used for the interface
-    # in the new domain. The interface in domain0 will use this address XOR'd
-    # with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random
-    # MACs are assigned if not given.
-    #
-    # ``bridge'' is a required parameter, which will be passed to the
-    # vif-script called by xend(8) when a new domain is created to configure
-    # the new xvif interface in domain0.
-    #
-    # In this example, the xvif is added to bridge0, which should have been
-    # set up prior to the new domain being created -- either in the
-    # ``network'' script or using a /etc/ifconfig.bridge0 file.
-    #
-    vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]
-
-    #----------------------------------------------------------------------------
-    # Define the disk devices you want the domain to have access to, and
-    # what you want them accessible as.
-    #
-    # Each disk entry is of the form:
-    #
-    # phy:DEV,VDEV,MODE
-    #
-    # where DEV is the device, VDEV is the device name the domain will see,
-    # and MODE is r for read-only, w for read-write. You can also create
-    # file-backed domains using disk entries of the form:
-    #
-    # file:PATH,VDEV,MODE
-    #
-    # where PATH is the path to the file used as the virtual disk, and VDEV
-    # and MODE have the same meaning as for ``phy'' devices.
-    #
-    # VDEV doesn't really matter for a NetBSD guest OS (it's just used as an index),
-    # but it does for Linux.
-    # Worse, the device has to exist in /dev/ of domain0, because xm will
-    # try to stat() it. This means that in order to load a Linux guest OS
-    # from a NetBSD domain0, you'll have to create /dev/hda1, /dev/hda2, ...
-    # on domain0, with the major/minor from Linux :(
-    # Alternatively it's possible to specify the device number in hex,
-    # e.g. 0x301 for /dev/hda1, 0x302 for /dev/hda2, etc ...
+See the earlier config file, and adjust memory. Decide on how much
+storage you will provide, and prepare it (file or lvm).

-    disk = [ 'phy:/dev/wd0e,0x1,w' ]
-    #disk = [ 'file:/var/xen/nbsd-disk,0x01,w' ]
-    #disk = [ 'file:/var/xen/nbsd-disk,0x301,w' ]
+While the kernel will be obtained from the dom0 filesystem, the same
+file should be present in the domU as /netbsd so that tools like
+savecore(8) can work. (This is helpful but not necessary.)
+
+The kernel must be specifically for Xen and for use as a domU. The
+i386 and amd64 ports provide the following kernels:
+
+    i386 XEN3_DOMU
+    i386 XEN3PAE_DOMU
+    amd64 XEN3_DOMU
+
+Unless using Xen 3.1 (and you shouldn't) with i386-mode Xen, you must
+use the PAE version of the i386 kernel.

-    #----------------------------------------------------------------------------
-    # Set the kernel command line for the new domain.
+This will boot NetBSD, but this is not that useful if the disk is
+empty. One approach is to unpack sets onto the disk outside of Xen
+(by mounting it, just as you would prepare a physical disk for a
+system you can't run the installer on); a sketch of this approach
+appears below.

-    # Set root device. This one does matter for NetBSD
-    root = "xbd0"
-
-    # extra parameters passed to the kernel
-    # this is where you can set boot flags like -s, -a, etc ...
-    #extra = ""
-
-    #----------------------------------------------------------------------------
-    # Set according to whether you want the domain restarted when it exits.
-    # The default is False.
-    #autorestart = True
-
-    # end of nbsd config file ====================================================
-
-When a new domain is created, xen calls the
-`/usr/pkg/etc/xen/vif-bridge` script for each virtual network interface
-created in *domain0*. This can be used to automatically configure the
-xvif?.? interfaces in *domain0*. In our example, these will be bridged
-with the bridge0 device in *domain0*, but the bridge has to exist first.
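+
+A sketch of that set-unpacking approach for a file-backed disk follows;
+the path, size, and set list are illustrative, and the commands are run
+in the dom0:
+
+    dom0# dd if=/dev/zero of=/var/xen/nbsd-disk bs=1m count=4096
+    dom0# vnconfig vnd0 /var/xen/nbsd-disk
+    dom0# disklabel -e -I vnd0              # add an 'a' partition
+    dom0# newfs /dev/rvnd0a
+    dom0# mount /dev/vnd0a /mnt
+    dom0# tar xzpf releasedir/amd64/binary/sets/base.tgz -C /mnt
+    dom0# tar xzpf releasedir/amd64/binary/sets/etc.tgz -C /mnt
+    dom0# gunzip < releasedir/amd64/binary/kernel/netbsd-XEN3_DOMU.gz > /mnt/netbsd
+    dom0# cd /mnt/dev && sh MAKEDEV all
+    dom0# cd / && umount /mnt
+    dom0# vnconfig -u vnd0
+
+Before unmounting, create a suitable /mnt/etc/fstab and rc.conf, and
+then point the domU's disk entry at the image (e.g.
+file:/var/xen/nbsd-disk,0x1,w).
+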
-To do this, create the file `/etc/ifconfig.bridge0` and make it look
-like this:
-
-    create
-    !brconfig $int add ex0 up
-
-(replace `ex0` with the name of your physical interface). Then bridge0
-will be created on boot. See the bridge(4) man page for details.
-
-So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for xvif?.? (a
-working vif-bridge is also provided with xentools20) configuring:
-
-    #!/bin/sh
-    #============================================================================
-    # $NetBSD: howto.mdwn,v 1.47 2014/12/26 18:35:45 gdt Exp $
-    #
-    # /usr/pkg/etc/xen/vif-bridge
-    #
-    # Script for configuring a vif in bridged mode with a dom0 interface.
-    # The xend(8) daemon calls a vif script when bringing a vif up or down.
-    # The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp
-    # in the ``vif-script'' field.
-    #
-    # Usage: vif-bridge up|down [var=value ...]
-    #
-    # Actions:
-    #    up     Adds the vif interface to the bridge.
-    #    down   Removes the vif interface from the bridge.
-    #
-    # Variables:
-    #    domain name of the domain the interface is on (required).
-    #    vifq   vif interface name (required).
-    #    mac    vif MAC address (required).
-    #    bridge bridge to add the vif to (required).
-    #
-    # Example invocation:
-    #
-    # vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0
-    #
-    #============================================================================
-
-    # Exit if anything goes wrong
-    set -e
-
-    echo "vif-bridge $*"
-
-    # Operation name.
-    OP=$1; shift
-
-    # Pull variables in args into environment
-    for arg ; do export "${arg}" ; done
-
-    # Required parameters. Fail if not set.
-    domain=${domain:?}
-    vif=${vif:?}
-    mac=${mac:?}
-    bridge=${bridge:?}
-
-    # Optional parameters. Set defaults.
-    ip=${ip:-''}   # default to null (do nothing)
-
-    # Are we going up or down?
-    case $OP in
-    up) brcmd='add' ;;
-    down) brcmd='delete' ;;
-    *)
-        echo 'Invalid command: ' $OP
-        echo 'Valid commands are: up, down'
-        exit 1
-        ;;
-    esac
-
-    # Don't do anything if the bridge is "null".
-    if [ "${bridge}" = "null" ] ; then
-        exit
-    fi
-
-    # Don't do anything if the bridge doesn't exist.
-    if ! ifconfig -l | grep "${bridge}" >/dev/null; then
-        exit
-    fi
-
-    # Add/remove vif to/from bridge.
-    ifconfig x${vif} $OP
-    brconfig ${bridge} ${brcmd} x${vif}
-
-Now, running
-
-    xm create -c /usr/pkg/etc/xen/nbsd
-
-should create a domain and load a NetBSD kernel in it. (Note: `-c`
-causes xm to connect to the domain's console once created.) The kernel
-will try to find its root file system on xbd0 (i.e., wd0e) which hasn't
-been created yet. wd0e will be seen as a disk device in the new domain,
-so it will be 'sub-partitioned'. We could attach a ccd to wd0e in
-*domain0* and partition it, newfs and extract the NetBSD/i386 or amd64
-tarballs there, but there's an easier way: load the
-`netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets.
-Like other install kernels, it contains a ramdisk with sysinst, so you
-can install NetBSD using sysinst on your new domain.
+A second approach is to run an INSTALL kernel, which has a miniroot
+and can load sets from the network. To do this, copy the INSTALL
+kernel to / and change the kernel line in the config file to:

-If you want to install NetBSD/Xen with a CDROM image, the following line
-should be used in the `/usr/pkg/etc/xen/nbsd` file:
+    kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
+
+Then, start the domain as "xl create -c configname".
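+
+As an illustration, a minimal config file for this install boot,
+placed in /usr/pkg/etc/xen/, might contain the following (a sketch
+only; the name, MAC address, memory size, and disk path are
+hypothetical):
+
+    kernel = "/netbsd-INSTALL_XEN3_DOMU"
+    memory = 256
+    name = "nbsd-new"
+    vcpus = 1
+    vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]
+    disk = [ 'file:/var/xen/nbsd-disk,0x1,w' ]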
+
+Alternatively, if you want to install NetBSD/Xen with a CDROM image, the following
+line should be used in the config file.

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be
-selected. The CDROM device should be changed to `xbd1d`.
+selected. The CDROM device should be changed to `xbd1d`.

-Once done installing, `halt -p` the new domain (don't reboot or halt, it
-would reload the INSTALL\_XEN3\_DOMU kernel even if you changed the
-config file), switch the config file back to the XEN3\_DOMU kernel, and
-start the new domain again. Now it should be able to use `root on xbd0a`
-and you should have a second, functional NetBSD system on your xen
-installation.
+Once done installing, "halt -p" the new domain (don't reboot or halt,
+it would reload the INSTALL_XEN3_DOMU kernel even if you changed the
+config file), switch the config file back to the XEN3_DOMU kernel,
+and start the new domain again. Now it should be able to use "root on
+xbd0a" and you should have a functional NetBSD domU.
+TODO: check if this is still accurate.

When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
@@ -852,10 +694,10 @@ Finally, all screens must be commented o

It is also desirable to add

-    powerd=YES
+    powerd=YES

in rc.conf. This way, the domain will be properly shut down if
-`xm shutdown -R` or `xm shutdown -H` is used on the domain0.
+`xm shutdown -R` or `xm shutdown -H` is used on the dom0.

Your domain should be now ready to work, enjoy.

@@ -871,30 +713,30 @@ the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300
-for hda). Linux builds device numbers as: (major \<\< 8 + minor). So,
-hda1 which has major 3 and minor 1 on a Linux system will have device
-number 0x301. Alternatively, devices names can be used (hda, hdb, ...)
-as xentools has a table to map these names to devices numbers. To export
-a partition to a Linux guest we can use:
+for hda). Linux builds device numbers as: (major \<\< 8 + minor).
+So, hda1 which has major 3 and minor 1 on a Linux system will have
+device number 0x301. Alternatively, device names can be used (hda,
+hdb, ...) as xentools has a table to map these names to device
+numbers. To export a partition to a Linux guest we can use:

-    disk = [ 'phy:/dev/wd0e,0x300,w' ]
-    root = "/dev/hda1 ro"
+    disk = [ 'phy:/dev/wd0e,0x300,w' ]
+    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as root
partition.

-To install the Linux system on the partition to be exported to the guest
-domain, the following method can be used: install sysutils/e2fsprogs
-from pkgsrc. Use mke2fs to format the partition that will be the root
-partition of your Linux domain, and mount it. Then copy the files from a
-working Linux system, make adjustments in `/etc` (fstab, network
-config). It should also be possible to extract binary packages such as
-.rpm or .deb directly to the mounted partition using the appropriate
-tool, possibly running under NetBSD's Linux emulation. Once the
-filesystem has been populated, umount it. If desirable, the filesystem
-can be converted to ext3 using tune2fs -j. It should now be possible to
-boot the Linux guest domain, using one of the vmlinuz-\*-xenU kernels
-available in the Xen binary distribution.
+To install the Linux system on the partition to be exported to the
+guest domain, the following method can be used: install
+sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition
+that will be the root partition of your Linux domain, and mount it.
+Then copy the files from a working Linux system, make adjustments in
+`/etc` (fstab, network config). It should also be possible to extract
+binary packages such as .rpm or .deb directly to the mounted partition
+using the appropriate tool, possibly running under NetBSD's Linux
+emulation. Once the filesystem has been populated, umount it. If
+desirable, the filesystem can be converted to ext3 using tune2fs -j.
+It should now be possible to boot the Linux guest domain, using one of
+the vmlinuz-\*-xenU kernels available in the Xen binary distribution.

To get the linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration since not all linux distributions auto-attach a
tty to the xen console.

Creating an unprivileged Solaris domain (domU)
----------------------------------------------

-Download an Opensolaris [release](http://opensolaris.org/os/downloads/)
-or [development snapshot](http://genunix.org/) DVD image. Attach the DVD
-image to a MAN.VND.4 device. Copy the kernel and ramdisk filesystem
-image to your dom0 filesystem.
-
-    dom0# mkdir /root/solaris
-    dom0# vnconfig vnd0 osol-1002-124-x86.iso
-    dom0# mount /dev/vnd0a /mnt
-
-    ## for a 64-bit guest
-    dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris
-    dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris
-
-    ## for a 32-bit guest
-    dom0# cp /mnt/boot/x86.microroot /root/solaris
-    dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris
-
-    dom0# umount /mnt
-
-
-Keep the MAN.VND.4 configured. For some reason the boot process stalls
-unless the DVD image is attached to the guest as a "phy" device. Create
-an initial configuration file with the following contents. Substitute
-*/dev/wd0k* with an empty partition at least 8 GB large.
-
-    memory = 640
-    name = 'solaris'
-    disk = [ 'phy:/dev/wd0k,0,w' ]
-    disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]
-    vif = [ 'bridge=bridge0' ]
-    kernel = '/root/solaris/unix'
-    ramdisk = '/root/solaris/x86.microroot'
-    # for a 64-bit guest
-    extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'
-    # for a 32-bit guest
-    #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'
-
-
-Start the guest.
-
-    dom0# xm create -c solaris.cfg
-    Started domain solaris
-                          v3.3.2 chgset 'unavailable'
-    SunOS Release 5.11 Version snv_124 64-bit
-    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
-    Use is subject to license terms.
-    Hostname: opensolaris
-    Remounting root read/write
-    Probing for device nodes ...
-    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
-    Preparing live image for use
-    Done mounting Live image
-
-
-Make sure the network is configured. Note that it can take a minute for
-the xnf0 interface to appear.
-
-    opensolaris console login: jack
-    Password: jack
-    Sun Microsystems Inc.  SunOS 5.11  snv_124  November 2008
-    jack@opensolaris:~$ pfexec sh
-    sh-3.2# ifconfig -a
-    sh-3.2# exit
-
-
-Set a password for VNC and start the VNC server which provides the X11
-display where the installation program runs.
-
-    jack@opensolaris:~$ vncpasswd
-    Password: solaris
-    Verify: solaris
-    jack@opensolaris:~$ cp .Xclients .vnc/xstartup
-    jack@opensolaris:~$ vncserver :1
-
-
-From a remote machine connect to the VNC server. Use `ifconfig xnf0` on
-the guest to find the correct IP address to use.
-
-    remote$ vncviewer 172.18.2.99:1
-
-
-It is also possible to launch the installation on a remote X11 display.
-
-    jack@opensolaris:~$ export DISPLAY=172.18.1.1:0
-    jack@opensolaris:~$ pfexec gui-install
-
-
-After the GUI installation is complete you will be asked to reboot.
-Before that you need to determine the ZFS ID for the new boot filesystem
-and update the configuration file accordingly. Return to the guest
-console.
-
-    jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs
-                    bootfs = 43
-    ^C
-    jack@opensolaris:~$
-
-
-The final configuration file should look like this. Note in particular
-the last line.
-
-    memory = 640
-    name = 'solaris'
-    disk = [ 'phy:/dev/wd0k,0,w' ]
-    vif = [ 'bridge=bridge0' ]
-    kernel = '/root/solaris/unix'
-    ramdisk = '/root/solaris/x86.microroot'
-    extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'
-
-
-Restart the guest to verify it works correctly.
-
-    dom0# xm destroy solaris
-    dom0# xm create -c solaris.cfg
-    Using config file "./solaris.cfg".
-    v3.3.2 chgset 'unavailable'
-    Started domain solaris
-    SunOS Release 5.11 Version snv_124 64-bit
-    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
-    Use is subject to license terms.
-    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
-    Hostname: osol
-    Configuring devices.
-    Loading smf(5) service descriptions: 160/160
-    svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
-    Reading ZFS config: done.
-    Mounting ZFS filesystems: (6/6)
-    Creating new rsa public/private host key pair
-    Creating new dsa public/private host key pair
-
-    osol console login:
-
-
-Using PCI devices in guest domains
-----------------------------------
-
-The domain0 can give other domains access to selected PCI devices. This
-can allow, for example, a non-privileged domain to have access to a
-physical network interface or disk controller. However, keep in mind
-that giving a domain access to a PCI device most likely will give the
-domain read/write access to the whole physical memory, as PCs don't have
-an IOMMU to restrict memory access to DMA-capable device. Also, it's not
-possible to export ISA devices to non-domain0 domains (which means that
-the primary VGA adapter can't be exported. A guest domain trying to
-access the VGA registers will panic).
-
-This functionality is only available in NetBSD-5.1 (and later) domain0
-and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as
-support has not been ported to later versions at this time.
-
-For a PCI device to be exported to a domU, is has to be attached to the
-`pciback` driver in domain0. Devices passed to the domain0 via the
-pciback.hide boot parameter will attach to `pciback` instead of the
-usual driver. The list of devices is specified as `(bus:dev.func)`,
-where bus and dev are 2-digit hexadecimal numbers, and func a
-single-digit number:
-
-    pciback.hide=(00:0a.0)(00:06.0)
-
-pciback devices should show up in the domain0's boot messages, and the
-devices should be listed in the `/kern/xen/pci` directory.
-
-PCI devices to be exported to a domU are listed in the `pci` array of
-the domU's config file, with the format `'0000:bus:dev.func'`
-
-    pci = [ '0000:00:06.0', '0000:00:0a.0' ]
-
-In the domU an `xpci` device will show up, to which one or more pci
-busses will attach. Then the PCI drivers will attach to PCI busses as
-usual. Note that the default NetBSD DOMU kernels do not have `xpci` or
-any PCI drivers built in by default; you have to build your own kernel
-to use PCI devices in a domU. Here's a kernel config example:
+See possibly outdated
+[Solaris domU instructions](/ports/xen/howto-solaris/).

-    include "arch/i386/conf/XEN3_DOMU"
-    #include "arch/i386/conf/XENU"       # in NetBSD 3.0
+PCI passthrough: Using PCI devices in guest domains
+---------------------------------------------------

-    # Add support for PCI busses to the XEN3_DOMU kernel
-    xpci* at xenbus ?
-    pci* at xpci ?
+The dom0 can give other domains access to selected PCI
+devices. This can allow, for example, a non-privileged domain to have
+access to a physical network interface or disk controller. However,
+keep in mind that giving a domain access to a PCI device most likely
+will give the domain read/write access to the whole physical memory,
+as PCs don't have an IOMMU to restrict memory access to DMA-capable
+devices. Also, it's not possible to export ISA devices to non-dom0
+domains, which means that the primary VGA adapter can't be exported.
+A guest domain trying to access the VGA registers will panic.
+
+If the dom0 is NetBSD, it has to be running Xen 3.1, as support has
+not been ported to later versions at this time.
+
+For a PCI device to be exported to a domU, it has to be attached to
+the "pciback" driver in dom0. Devices passed to the dom0 via the
+pciback.hide boot parameter will attach to "pciback" instead of the
+usual driver. The list of devices is specified as "(bus:dev.func)",
+where bus and dev are 2-digit hexadecimal numbers, and func a
+single-digit number:

-    # Now add PCI and related devices to be used by this domain
-    # USB Controller and Devices
-
-    # PCI USB controllers
-    uhci* at pci? dev ? function ?        # Universal Host Controller (Intel)
+    pciback.hide=(00:0a.0)(00:06.0)

-    # USB bus support
-    usb* at uhci?
+pciback devices should show up in the dom0's boot messages, and the
+devices should be listed in the `/kern/xen/pci` directory.

-    # USB Hubs
-    uhub* at usb?
-    uhub* at uhub? port ? configuration ? interface ?
+PCI devices to be exported to a domU are listed in the "pci" array of
+the domU's config file, with the format "0000:bus:dev.func".

-    # USB Mass Storage
-    umass* at uhub? port ? configuration ? interface ?
-    wd* at umass?
-    # SCSI controllers
-    ahc* at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI
+    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

-    # SCSI bus support (for both ahc and umass)
-    scsibus* at scsi?
+In the domU an "xpci" device will show up, to which one or more pci
+busses will attach. Then the PCI drivers will attach to PCI busses as
+usual. Note that the default NetBSD DOMU kernels do not have "xpci"
+or any PCI drivers built in by default; you have to build your own
+kernel to use PCI devices in a domU. Here's a kernel config example;
+note that only the "xpci" lines are unusual.

-    # SCSI devices
-    sd* at scsibus? target ? lun ?        # SCSI disk drives
-    cd* at scsibus? target ? lun ?        # SCSI CD-ROM drives
+    include "arch/i386/conf/XEN3_DOMU"
+
+    # Add support for PCI busses to the XEN3_DOMU kernel
+    xpci* at xenbus ?
+    pci* at xpci ?
+
+    # PCI USB controllers
+    uhci* at pci? dev ? function ?        # Universal Host Controller (Intel)
+
+    # USB bus support
+    usb* at uhci?
+
+    # USB Hubs
+    uhub* at usb?
+    uhub* at uhub? port ? configuration ? interface ?
+
+    # USB Mass Storage
+    umass* at uhub? port ? configuration ? interface ?
+    wd* at umass?
+    # SCSI controllers
+    ahc* at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI
+
+    # SCSI bus support (for both ahc and umass)
+    scsibus* at scsi?
+
+    # SCSI devices
+    sd* at scsibus? target ? lun ?        # SCSI disk drives
+    cd* at scsibus? target ? lun ?        # SCSI CD-ROM drives

NetBSD as a domU in a VPS
=========================

@@ -1121,10 +830,29 @@ hardware. This section explains how to
virtual private server where you do not control or have access to the
dom0.

-TODO: Perhaps reference panix, prmgr, amazon as interesting examples.
+VPS operators provide varying degrees of access and mechanisms for
+configuration. The big issue is usually how one controls which kernel
+is booted, because the kernel is nominally in the dom0 filesystem (to
+which VPS users do not normally have access).
+
+A VPS user may want to compile a kernel for security updates, to run
+npf, to run IPsec, or for any other reason why someone would want to
+change their kernel.
+
+One approach is to have an administrative interface to upload a kernel,
+or to select from a prepopulated list.
+
+Other approaches are pvgrub and py-grub, which are ways to start a
+bootloader from the dom0 instead of the actual domU kernel, and for
+that loader to then load a kernel from the domU filesystem. This is
+closer to a regular physical computer, where someone who controls a
+machine can replace the kernel.
+
+prmgr and pvgrub
+----------------

-TODO: Somewhere, discuss pvgrub and py-grub to load the domU kernel
-from the domU filesystem.
+TODO: Perhaps reference panix, prmgr, amazon as interesting examples.
+Explain what prmgr does.

Using npf
---------