Diff for /wikisrc/ports/xen/howto.mdwn between versions 1.49 and 1.52

version 1.49, 2014/12/26 20:25:19
version 1.52, 2014/12/26 23:46:22

Line 204 (v.1.49) / Line 204 (v.1.52):

alternately with little problems, simply
Xen daemons when not running Xen.

Note that NetBSD as dom0 does not support multiple CPUs.  This will
limit the performance of the Xen/dom0 workstation approach.

Added in v.1.52 (continuing the paragraph above):

In theory the only issue is that the "backend drivers" are not yet MPSAFE:
    http://mail-index.netbsd.org/netbsd-users/2014/08/29/msg015195.html
   
Installation of NetBSD
----------------------
Line 744 (v.1.49) / Line 746 (v.1.52):

tty to the xen console.

Creating an unprivileged Solaris domain (domU)
----------------------------------------------
   
Removed from v.1.49:

Download an Opensolaris [release](http://opensolaris.org/os/downloads/)
or [development snapshot](http://genunix.org/) DVD image. Attach the DVD
image to a vnd(4) device. Copy the kernel and ramdisk filesystem
 image to your dom0 filesystem.  
   
     dom0# mkdir /root/solaris  
     dom0# vnconfig vnd0 osol-1002-124-x86.iso  
     dom0# mount /dev/vnd0a /mnt  
   
     ## for a 64-bit guest  
     dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris  
     dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris  
   
     ## for a 32-bit guest  
     dom0# cp /mnt/boot/x86.microroot /root/solaris  
     dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris  
   
     dom0# umount /mnt  
             
   
Keep the vnd(4) configured. For some reason the boot process stalls
 unless the DVD image is attached to the guest as a "phy" device. Create  
 an initial configuration file with the following contents. Substitute  
 */dev/wd0k* with an empty partition at least 8 GB large.  
   
     memory = 640  
     name = 'solaris'  
     disk = [ 'phy:/dev/wd0k,0,w' ]  
     disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]  
     vif = [ 'bridge=bridge0' ]  
     kernel = '/root/solaris/unix'  
     ramdisk = '/root/solaris/x86.microroot'  
     # for a 64-bit guest  
     extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'  
     # for a 32-bit guest  
     #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'  
             
   
 Start the guest.  
   
     dom0# xm create -c solaris.cfg  
     Started domain solaris  
                           v3.3.2 chgset 'unavailable'  
     SunOS Release 5.11 Version snv_124 64-bit  
     Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.  
     Use is subject to license terms.  
     Hostname: opensolaris  
     Remounting root read/write  
     Probing for device nodes ...  
     WARNING: emlxs: ddi_modopen drv/fct failed: err 2  
     Preparing live image for use  
     Done mounting Live image  
             
   
 Make sure the network is configured. Note that it can take a minute for  
 the xnf0 interface to appear.  
   
     opensolaris console login: jack  
     Password: jack  
     Sun Microsystems Inc.   SunOS 5.11      snv_124 November 2008  
     jack@opensolaris:~$ pfexec sh  
     sh-3.2# ifconfig -a  
     sh-3.2# exit  
             
   
 Set a password for VNC and start the VNC server which provides the X11  
 display where the installation program runs.  
   
     jack@opensolaris:~$ vncpasswd  
     Password: solaris  
     Verify: solaris  
     jack@opensolaris:~$ cp .Xclients .vnc/xstartup  
     jack@opensolaris:~$ vncserver :1  
             
   
 From a remote machine connect to the VNC server. Use `ifconfig xnf0` on  
 the guest to find the correct IP address to use.  
   
     remote$ vncviewer 172.18.2.99:1  
             
   
 It is also possible to launch the installation on a remote X11 display.  
   
     jack@opensolaris:~$ export DISPLAY=172.18.1.1:0  
     jack@opensolaris:~$ pfexec gui-install  
              
   
 After the GUI installation is complete you will be asked to reboot.  
 Before that you need to determine the ZFS ID for the new boot filesystem  
 and update the configuration file accordingly. Return to the guest  
 console.  
   
     jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs  
                     bootfs = 43  
     ^C  
     jack@opensolaris:~$  
              
   
 The final configuration file should look like this. Note in particular  
 the last line.  
   
     memory = 640  
     name = 'solaris'  
     disk = [ 'phy:/dev/wd0k,0,w' ]  
     vif = [ 'bridge=bridge0' ]  
     kernel = '/root/solaris/unix'  
     ramdisk = '/root/solaris/x86.microroot'  
     extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'  
              
   
 Restart the guest to verify it works correctly.  
   
     dom0# xm destroy solaris  
     dom0# xm create -c solaris.cfg  
     Using config file "./solaris.cfg".  
     v3.3.2 chgset 'unavailable'  
     Started domain solaris  
     SunOS Release 5.11 Version snv_124 64-bit  
     Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.  
     Use is subject to license terms.  
     WARNING: emlxs: ddi_modopen drv/fct failed: err 2  
     Hostname: osol  
     Configuring devices.  
     Loading smf(5) service descriptions: 160/160  
     svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .  
     Reading ZFS config: done.  
     Mounting ZFS filesystems: (6/6)  
     Creating new rsa public/private host key pair  
     Creating new dsa public/private host key pair  
   
     osol console login:  
              
   
Added in v.1.52:

See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/).

Removed from v.1.49:

Using PCI devices in guest domains
----------------------------------
   
 The domain0 can give other domains access to selected PCI devices. This  
 can allow, for example, a non-privileged domain to have access to a  
 physical network interface or disk controller. However, keep in mind  
 that giving a domain access to a PCI device most likely will give the  
 domain read/write access to the whole physical memory, as PCs don't have  
an IOMMU to restrict memory access to DMA-capable devices. Also, it's not
 possible to export ISA devices to non-domain0 domains (which means that  
 the primary VGA adapter can't be exported. A guest domain trying to  
 access the VGA registers will panic).  
   
 This functionality is only available in NetBSD-5.1 (and later) domain0  
 and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as  
 support has not been ported to later versions at this time.  
   
For a PCI device to be exported to a domU, it has to be attached to the
 `pciback` driver in domain0. Devices passed to the domain0 via the  
 pciback.hide boot parameter will attach to `pciback` instead of the  
 usual driver. The list of devices is specified as `(bus:dev.func)`,  
 where bus and dev are 2-digit hexadecimal numbers, and func a  
 single-digit number:  
   
     pciback.hide=(00:0a.0)(00:06.0)  
   
 pciback devices should show up in the domain0's boot messages, and the  
 devices should be listed in the `/kern/xen/pci` directory.  
   
 PCI devices to be exported to a domU are listed in the `pci` array of  
 the domU's config file, with the format `'0000:bus:dev.func'`  
   
     pci = [ '0000:00:06.0', '0000:00:0a.0' ]  
   
 In the domU an `xpci` device will show up, to which one or more pci  
 busses will attach. Then the PCI drivers will attach to PCI busses as  
 usual. Note that the default NetBSD DOMU kernels do not have `xpci` or  
 any PCI drivers built in by default; you have to build your own kernel  
 to use PCI devices in a domU. Here's a kernel config example:  
   
     include         "arch/i386/conf/XEN3_DOMU"  
     #include         "arch/i386/conf/XENU"           # in NetBSD 3.0  
   
    # Add support for PCI busses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # Now add PCI and related devices to be used by this domain
    # USB Controller and Devices

    # PCI USB controllers
    uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)

    # USB bus support
    usb*    at uhci?

    # USB Hubs
    uhub*   at usb?
    uhub*   at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass*  at uhub? port ? configuration ? interface ?
    wd*     at umass?
    # SCSI controllers
    ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd*     at scsibus? target ? lun ?      # SCSI disk drives
    cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives

Added in v.1.52:

PCI passthrough: Using PCI devices in guest domains
---------------------------------------------------

The domain0 can give other domains access to selected PCI
devices. This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller.  However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices.  Also, it's not possible to export ISA devices to non-domain0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.

If the domain0 is NetBSD, it has to be running Xen 3.1, as support has
not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to
the "pciback" driver in dom0.  Devices passed to the dom0 via the
pciback.hide boot parameter will attach to "pciback" instead of the
usual driver.  The list of devices is specified as "(bus:dev.func)",
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

        pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.

PCI devices to be exported to a domU are listed in the "pci" array of
the domU's config file, with the format "0000:bus:dev.func".

        pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an "xpci" device will show up, to which one or more pci
busses will attach.  Then the PCI drivers will attach to PCI busses as
usual.  Note that the default NetBSD DOMU kernels do not have "xpci"
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU.  Here's a kernel config example;
note that only the "xpci" lines are unusual.
   
           include         "arch/i386/conf/XEN3_DOMU"
   
           # Add support for PCI busses to the XEN3_DOMU kernel
           xpci* at xenbus ?
           pci* at xpci ?
   
           # PCI USB controllers
           uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)
   
           # USB bus support
           usb*    at uhci?
   
           # USB Hubs
           uhub*   at usb?
           uhub*   at uhub? port ? configuration ? interface ?
   
           # USB Mass Storage
           umass*  at uhub? port ? configuration ? interface ?
           wd*     at umass?
           # SCSI controllers
           ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI
   
           # SCSI bus support (for both ahc and umass)
           scsibus* at scsi?
   
           # SCSI devices
           sd*     at scsibus? target ? lun ?      # SCSI disk drives
           cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives
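
To make the "pciback.hide" step above concrete, here is a rough sketch of
one plausible way to pass the parameter to the dom0 kernel from /boot.cfg
and then check the result.  This is an illustration only: the dom0 kernel
name, the Xen kernel path and the dom0_mem value are assumptions and will
differ between installations.

        # /boot.cfg sketch -- kernel names, paths and dom0_mem are assumptions
        menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc pciback.hide=(00:0a.0)(00:06.0);multiboot /usr/pkg/xen3-kernel/xen.gz dom0_mem=256M

        # after rebooting, devices attached to pciback should be listed here
        dom0# ls /kern/xen/pci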
   
   
 NetBSD as a domU in a VPS  NetBSD as a domU in a VPS
Line 959 (v.1.49) / Line 828 (v.1.52):

hardware.  This section explains how to
virtual private server where you do not control or have access to the
dom0.
   
Removed from v.1.49:

TODO: Perhaps reference panix, prmgr, amazon as interesting examples.

TODO: Somewhere, discuss pvgrub and py-grub to load the domU kernel
from the domU filesystem.

Added in v.1.52:

  VPS operators provide varying degrees of access and mechanisms for
   configuration.  The big issue is usually how one controls which kernel
   is booted, because the kernel is nominally in the dom0 filesystem (to
  which VPS users do not normally have access).
   
   A VPS user may want to compile a kernel for security updates, to run
   npf, run IPsec, or any other reason why someone would want to change
   their kernel.
   
  One approach is to have an administrative interface to upload a kernel,
   or to select from a prepopulated list.
   
  Other approaches are pvgrub and py-grub, which are ways to start a
   bootloader from the dom0 instead of the actual domU kernel, and for
   that loader to then load a kernel from the domU filesystem.  This is
   closer to a regular physical computer, where someone who controls a
   machine can replace the kernel.
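
  As an illustration of the pvgrub/py-grub approach, a dom0-side guest
  config in that style names a bootloader instead of a kernel.  The
  sketch below is only an example: the guest name, memory size and
  bootloader path are assumptions (providers install pygrub in different
  places), and the disk and vif lines simply reuse values from the
  examples earlier in this howto.

        # sketch of a pygrub-style guest config; names and paths are assumptions
        name = 'netbsd-domU'
        memory = 256
        bootloader = 'pygrub'        # often given as a full path on the dom0
        disk = [ 'phy:/dev/wd0k,0,w' ]
        vif = [ 'bridge=bridge0' ]
        # note: there is no kernel= line; the bootloader reads its boot
        # configuration from inside the domU's own filesystem.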
   
   prmgr and pvgrub
   ----------------
   
  TODO: Perhaps reference panix, prmgr, amazon as interesting examples.
  Explain what prmgr does.
   
Using npf
---------
