File: [NetBSD Developer Wiki] / wikisrc / ports / xen / howto.mdwn
Revision 1.21, Wed Dec 24 01:12:49 2014 UTC, by gdt: rc.conf for 3.3, 4.1 and 4.2

    1: Introduction
    2: ============
    3: 
    4: [![Xen
    5: screenshot](http://www.netbsd.org/gallery/in-Action/hubertf-xens.png)](../../gallery/in-Action/hubertf-xen.png)
    6: 
    7: Xen is a virtual machine monitor or hypervisor for x86 hardware
    8: (i686-class or higher), which supports running multiple guest
    9: operating systems on a single physical machine.  With Xen, one uses
   10: the Xen kernel to control the CPU, memory and console, a dom0
   11: operating system which mediates access to other hardware (e.g., disks,
   12: network, USB), and one or more domU operating systems which operate in
   13: an unprivileged virtualized environment.  IO requests from the domU
   14: systems are forwarded by the hypervisor (Xen) to the dom0 to be
   15: fulfilled.
   16: 
   17: Xen supports two styles of guests.  The original is Para-Virtualized
   18: (PV) which means that the guest OS does not attempt to access hardware
   19: directly, but instead makes hypercalls to the hypervisor.  This is
   20: analogous to a user-space program making system calls.  (The dom0
   21: operating system uses PV calls for some functions, such as updating
   22: memory mapping page tables, but has direct hardware access for disk
   23: and network.)   PV guests must be specifically coded for Xen.
   24: 
   25: The more recent style is HVM, which means that the guest does not have
   26: code for Xen and need not be aware that it is running under Xen.
   27: Attempts to access hardware registers are trapped and emulated.  This
   28: style is less efficient but can run unmodified guests.
   29: 
   30: Generally any amd64 machine will work with Xen and PV guests.  For HVM
   31: guests, the VT/VMX CPU feature (Intel) or SVM (AMD) is needed;
   32: "cpuctl identify 0" will show this.  TODO: Clean up and check
   33: the above features.  TODO: Explain whether i386 (non-amd64) machines can
   34: still be used - I think the requirement to use PAE kernels is
   35: related to the hypervisor being amd64 only.
   36: 
   37: At boot, the dom0 kernel is loaded as a module, with Xen as the kernel.
   38: The dom0 can start one or more domUs.  (Booting is explained in detail
   39: in the dom0 section.)
   40: 
   41: NetBSD supports Xen in that it can serve as dom0, be used as a domU,
   42: and that Xen kernels and tools are available in pkgsrc.  This HOWTO
   43: attempts to address both the case of running a NetBSD dom0 on hardware
   44: and running NetBSD as a domU in a VPS.
   45: 
   46: Some versions of Xen support "PCI passthrough", which means that
   47: specific PCI devices can be made available to a specific domU instead
   48: of the dom0.  This can be useful to let a domU run X11, or access some
   49: network interface or other peripheral.
   50: 
   51: Prerequisites
   52: -------------
   53: 
   54: Installing NetBSD/Xen is not extremely difficult, but it is more
   55: complex than a normal installation of NetBSD.
   56: This HOWTO is occasionally overly restrictive about how
   57: things must be done, guiding the reader to stay on the established
   58: path when there are no known good reasons to stray.
   59: 
   60: This HOWTO presumes a basic familiarity with the Xen system
   61: architecture.  This HOWTO presumes familiarity with installing NetBSD
   62: on i386/amd64 hardware and installing software from pkgsrc.
   63: See also the [Xen website](http://www.xen.org/).
   64: 
   65: History
   66: -------
   67: 
   68: NetBSD used to support Xen2; this has been removed.
   69: 
   70: Before NetBSD's native bootloader could support Xen, the use of
   71: grub was recommended.  If necessary, see the
   72: [old grub information](/xen/howto-grub/).
   73: 
   74: Versions of Xen and NetBSD
   75: ==========================
   76: 
   77: Most of the installation concepts and instructions are independent of
   78: Xen version.  This section gives advice on which version to choose.
   79: Versions not in pkgsrc and older unsupported versions of NetBSD are
   80: intentionally ignored.
   81: 
   82: Xen
   83: ---
   84: 
   85: In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
   86: xenkernel and xentools.  We will refer only to the kernel versions,
   87: but note that both packages must be installed together and must have
   88: matching versions.
   89: 
   90: xenkernel3 and xenkernel33 provide Xen 3.1 and 3.3.  These no longer
   91: receive security patches and should not be used.  Xen 3.1 supports PCI
   92: passthrough.
   93: 
   94: xenkernel41 provides Xen 4.1.  This is no longer maintained by Xen,
   95: but as of 2014-12 receives backported security patches.  It is a
   96: reasonable although trailing-edge choice.
   97: 
   98: xenkernel42 provides Xen 4.2.  This is maintained by Xen, but old as
   99: of 2014-12.
  100: 
  101: Ideally newer versions of Xen will be added to pkgsrc.
  102: 
  103: Note that NetBSD support is called XEN3; it works with 3.1 through
  104: 4.2, because the hypercall interface has been stable.
  105: 
  106: Xen command program
  107: -------------------
  108: 
  109: Early Xen used a program called "xm" to manipulate the system from the
  110: dom0.  Starting in 4.1, a replacement program with similar behavior
  111: called "xl" is provided.  In 4.2, "xm" is no longer available.
  112: 
  113: NetBSD
  114: ------
  115: 
  116: The netbsd-5, netbsd-6, netbsd-7, and -current branches are all
  117: reasonable choices, with more or less the same considerations for
  118: non-Xen use.  Therefore, netbsd-6, the stable branch of the most
  119: recent release, is recommended.
  120: 
  121: As of NetBSD 6, a NetBSD domU will support multiple vcpus.  There is
  122: no SMP support for NetBSD as dom0.  (The dom0 itself doesn't really
  123: need SMP; the lack of support is really a problem when using a dom0 as
  124: a normal computer.)
  125: 
  126: Architecture
  127: ------------
  128: 
  129: Xen is basically amd64 only at this point.  One can either run i386
  130: domains or amd64 domains.  If running i386, PAE versions are required,
  131: for both dom0 and domU.  These versions are built by default in NetBSD
  132: releases.  While i386 dom0 works fine, amd64 is recommended as more
  133: normal.  (Note that emacs, at least, fails if built without PAE but
  134: run on i386 with PAE, and vice versa, presumably due to bugs in the
  135: undump code.)
  136: 
  137: Recommendation
  138: --------------
  139: 
  140: Therefore, this HOWTO recommends running xenkernel42 (and xentools42)
  141: with xl, using the NetBSD 6 stable branch, and using amd64 as the dom0.
  142: Either i386 or amd64 NetBSD may be used as a domU.
  143: 
  144: NetBSD as a dom0
  145: ================
  146: 
  147: NetBSD can be used as a dom0 and works very well.  The following
  148: sections address installation, updating NetBSD, and updating Xen.
  149: Note that it doesn't make sense to talk about installing a dom0 OS
  150: without also installing Xen itself.  We first address installing
  151: NetBSD, which is not yet a dom0, and then adding Xen, pivoting the
  152: NetBSD install to a dom0 install by just changing the kernel and boot
  153: configuration.
  154: 
  155: Styles of dom0 operation
  156: ------------------------
  157: 
  158: There are two basic ways to use Xen.  The traditional method is for
  159: the dom0 to do absolutely nothing other than providing support to some
  160: number of domUs.  Such a system was probably installed for the sole
  161: purpose of hosting domUs, and sits in a server room on a UPS.
  162: 
  163: The other way is to put Xen under a normal-usage computer, so that the
  164: dom0 is what the computer would have been without Xen, perhaps a
  165: desktop or laptop.  Then, one can run domUs at will.  Purists will
  166: deride this as less secure than the previous approach, and for a
  167: computer whose purpose is to run domUs, they are right.  But Xen and a
  168: dom0 (without domUs) is not meaningfully less secure than the same
  169: things running without Xen.  One can boot Xen or boot regular NetBSD
  170: alternately with few problems, simply refraining from starting the
  171: Xen daemons when not running Xen.
  172: 
  173: Note that NetBSD as dom0 does not support multiple CPUs.  This will
  174: limit the performance of the Xen/dom0 workstation approach.
  175: 
  176: Installation of NetBSD
  177: ----------------------
  178: 
  179: First,
  180: [install NetBSD/amd64](../../docs/guide/en/chap-inst.html)
  181: just as you would if you were not using Xen.
  182: However, the partitioning approach is very important.
  183: 
  184: If you want to use RAIDframe for the dom0, there are no special issues
  185: for Xen.  Typically one provides RAID storage for the dom0, and the
  186: domU systems are unaware of RAID.
  187: 
  188: There are four styles of providing backing storage for the virtual
  189: disks used by domUs: raw partitions, LVM, file-backed vnd(4), and SAN.
  190: 
  191: With raw partitions, one has a disklabel (or gpt) partition sized for
  192: each virtual disk to be used by the domU.  (If you are able to predict
  193: how domU usage will evolve, please add an explanation to the HOWTO.
  194: Seriously, needs tend to change over time.)
  195: 
  196: One can use lvm(8) to create logical devices to use for domU disks.
  197: This is almost as efficient as raw disk partitions and more flexible.
  198: Hence raw disk partitions should typically not be used.
  199: 
  200: One can use files in the dom0 filesystem, typically created by dd'ing
  201: /dev/zero to create a specific size.  This is somewhat less efficient,
  202: but very convenient, as one can cp the files for backup, or move them
  203: between dom0 hosts.
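
For example, a 4 GB file-backed disk could be created as follows (the
path matches the example domU config later in this HOWTO; the size is
arbitrary, and `count=0 seek=4096` makes dd extend the file to the
desired size without writing gigabytes of zeros):

```shell
# Create a sparse 4 GB file to back a domU virtual disk.
# bs=1048576 (1 MiB) * seek=4096 blocks = 4 GiB; count=0 writes no data.
mkdir -p /var/xen
dd if=/dev/zero of=/var/xen/nbsd-disk bs=1048576 count=0 seek=4096
```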
  204: 
  205: Finally, in theory one can place the files backing the domU disks in a
  206: SAN.  (This is an invitation for someone who has done this to add a
  207: HOWTO page.)
  208: 
  209: Installation of Xen
  210: -------------------
  211: 
  212: In the dom0, install sysutils/xenkernel42 and sysutils/xentools42 from
  213: pkgsrc (or another matching pair).
  214: See [the pkgsrc
  215: documentation](http://www.NetBSD.org/docs/pkgsrc/) for help with pkgsrc.
  216: 
  217: For Xen 3.1, support for HVM guests is in sysutils/xentool3-hvm.  More
  218: recent versions have HVM support integrated in the main xentools
  219: package.  It is entirely reasonable to run only PV guests.
  220: 
  221: Next you need to copy the selected Xen kernel into place; pkgsrc
  222: installs it as "/usr/pkg/xen*-kernel/xen.gz".  Copy it to /.
  223: For debugging, one may copy xen-debug.gz; this is conceptually similar
  224: to DIAGNOSTIC and DEBUG in NetBSD.  xen-debug.gz is basically only
  225: useful with a serial console.  Then, place a NetBSD XEN3_DOM0 kernel
  226: in /, copied from releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
  227: of a NetBSD build.  Both xen and NetBSD may be left compressed.  (If
  228: using i386, use releasedir/i386/binary/kernel/netbsd-XEN3PAE_DOM0.gz.)
  229: 
  230: In a dom0 kernel, kernfs is mandatory for xend to communicate with the
  231: kernel, so ensure that /kern is in fstab.
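
For example, the standard NetBSD fstab entry for kernfs looks like
this:

    kernfs          /kern   kernfs  rw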
  232: 
  233: Because you already installed NetBSD, you have a working boot setup
  234: with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
  235: beginning of your root filesystem, /boot present, and likely
  236: /boot.cfg.  (If not, fix before continuing!)
  237: 
  238: See boot.cfg(5) for an example.  The basic line is
  239: 
  240:     menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
  241: 
  242: which specifies that the dom0 should have 256M, leaving the rest to be
  243: allocated for domUs.
  244: 
  245: As with non-Xen systems, you should have a line to boot /netbsd (a
  246: kernel that works without Xen) and fallback versions of the non-Xen
  247: kernel, Xen, and the dom0 kernel.
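
Putting the pieces together, a complete /boot.cfg might look like the
following sketch (the Xen line is the one given above; the other lines
are typical defaults, and 256M is just an example dom0 allocation):

    banner=Welcome to NetBSD
    menu=Boot normally:boot netbsd
    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
    menu=Drop to boot prompt:prompt
    timeout=5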
  248: 
  249: Configuring Xen
  250: ---------------
  251: 
  252: Now, you have a system that will boot Xen and the dom0 kernel, and
  253: just run the dom0 kernel.  There will be no domUs, and none can be
  254: started because you still have to configure the dom0 tools.
  255: 
  256: For 3.3 (and probably 3.1), add to rc.conf (but note that you should
  257: have installed 4.2):
  258:     xend=YES
  259:     xenbackendd=YES
  260: 
  261: For 4.1 and 4.2, add to rc.conf:
  262:     xend=YES
  263:     xencommons=YES
  264: 
  265: Updating NetBSD in a dom0
  266: -------------------------
  267: 
  268: This is just like updating NetBSD on bare hardware, assuming the new
  269: version supports the version of Xen you are running.  Generally, one
  270: replaces the kernel and reboots, and then overlays userland binaries
  271: and adjusts /etc.
  272: 
  273: Note that one must update both the non-Xen kernel typically used for
  274: rescue purposes and the DOM0 kernel used with Xen.
  275: 
  276: Updating Xen versions
  277: ---------------------
  278: 
  279: Updating Xen is conceptually not difficult, but can run into all the
  280: issues found when installing Xen.  Assuming migration from 4.1 to 4.2,
  281: remove the xenkernel41 and xentools41 packages and install the
  282: xenkernel42 and xentools42 packages.  Copy the 4.2 xen.gz to /.
  283: 
  284: Ensure that the contents of /etc/rc.d/xen* are correct.  Enable the
  285: correct set of daemons.  Ensure that the domU config files are valid
  286: for the new version.
  287: 
  288: Creating unprivileged domains (domU)
  289: ====================================
  290: 
  291: Creating domUs is almost entirely independent of operating system.  We
  292: first explain NetBSD, and then differences for Linux and Solaris.
  293: 
  294: Creating an unprivileged NetBSD domain (domU)
  295: ---------------------------------------------
  296: 
  297: Once you have *domain0* running, you need to start the xen tool daemon
  298: (`/usr/pkg/share/examples/rc.d/xend start`) and the xen backend daemon
  299: (`/usr/pkg/share/examples/rc.d/xenbackendd start` for Xen3\*,
  300: `/usr/pkg/share/examples/rc.d/xencommons start` for Xen4.\*). Make sure
  301: that `/dev/xencons` and `/dev/xenevt` exist before starting `xend`. You
  302: can create them with this command:
  303: 
  304:     # cd /dev && sh MAKEDEV xen
  305: 
  306: xend will write logs to `/var/log/xend.log` and
  307: `/var/log/xend-debug.log`. You can then control Xen with the xm tool.
  308: 'xm list' will show something like:
  309: 
  310:     # xm list
  311:     Name              Id  Mem(MB)  CPU  State  Time(s)  Console
  312:     Domain-0           0       64    0  r----     58.1
  313: 
  314: 'xm create' allows you to create a new domain. It uses a config file in
  315: PKG\_SYSCONFDIR for its parameters. By default, this file will be in
  316: `/usr/pkg/etc/xen/`. On creation, a kernel has to be specified, which
  317: will be executed in the new domain (this kernel is in the *domain0* file
  318: system, not on the new domain virtual disk; but please note, you should
  319: install the same kernel into *domainU* as `/netbsd` in order to make
  320: your system tools, like savecore(8), work). A suitable kernel is
  321: provided as part of the i386 and amd64 binary sets: XEN3\_DOMU.
  322: 
  323: Here is an example /usr/pkg/etc/xen/nbsd config file:
  324: 
  325:     #  -*- mode: python; -*-
  326:     #============================================================================
  327:     # Python defaults setup for 'xm create'.
  328:     # Edit this file to reflect the configuration of your system.
  329:     #============================================================================
  330: 
  331:     #----------------------------------------------------------------------------
  332:     # Kernel image file. This kernel will be loaded in the new domain.
  333:     kernel = "/home/bouyer/netbsd-XEN3_DOMU"
  334:     #kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
  335: 
  336:     # Memory allocation (in megabytes) for the new domain.
  337:     memory = 128
  338: 
  339:     # A handy name for your new domain. This will appear in 'xm list',
  340:     # and you can use this as parameters for xm in place of the domain
  341:     # number. All domains must have different names.
  342:     #
  343:     name = "nbsd"
  344: 
  345:     # The number of virtual CPUs this domain has.
  346:     #
  347:     vcpus = 1
  348: 
  349:     #----------------------------------------------------------------------------
  350:     # Define network interfaces for the new domain.
  351: 
  352:     # Number of network interfaces (must be at least 1). Default is 1.
  353:     nics = 1
  354: 
  355:     # Define MAC and/or bridge for the network interfaces.
  356:     #
  357:     # The MAC address specified in ``mac'' is the one used for the interface
  358:     # in the new domain. The interface in domain0 will use this address XOR'd
  359:     # with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random
  360:     # MACs are assigned if not given.
  361:     #
  362:     # ``bridge'' is a required parameter, which will be passed to the
  363:     # vif-script called by xend(8) when a new domain is created to configure
  364:     # the new xvif interface in domain0.
  365:     #
  366:     # In this example, the xvif is added to bridge0, which should have been
  367:     # set up prior to the new domain being created -- either in the
  368:     # ``network'' script or using a /etc/ifconfig.bridge0 file.
  369:     #
  370:     vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]
  371: 
  372:     #----------------------------------------------------------------------------
  373:     # Define the disk devices you want the domain to have access to, and
  374:     # what you want them accessible as.
  375:     #
  376:     # Each disk entry is of the form:
  377:     #
  378:     #   phy:DEV,VDEV,MODE
  379:     #
  380:     # where DEV is the device, VDEV is the device name the domain will see,
  381:     # and MODE is r for read-only, w for read-write.  You can also create
  382:     # file-backed domains using disk entries of the form:
  383:     #
  384:     #   file:PATH,VDEV,MODE
  385:     #
  386:     # where PATH is the path to the file used as the virtual disk, and VDEV
  387:     # and MODE have the same meaning as for ``phy'' devices.
  388:     #
  389:     # VDEV doesn't really matter for a NetBSD guest OS (it's just used as an index),
  390:     # but it does for Linux.
  391:     # Worse, the device has to exist in /dev/ of domain0, because xm will
  392:     # try to stat() it. This means that in order to load a Linux guest OS
  393:     # from a NetBSD domain0, you'll have to create /dev/hda1, /dev/hda2, ...
  394:     # on domain0, with the major/minor from Linux :(
  395:     # Alternatively it's possible to specify the device number in hex,
  396:     # e.g. 0x301 for /dev/hda1, 0x302 for /dev/hda2, etc ...
  397: 
  398:     disk = [ 'phy:/dev/wd0e,0x1,w' ]
  399:     #disk = [ 'file:/var/xen/nbsd-disk,0x01,w' ]
  400:     #disk = [ 'file:/var/xen/nbsd-disk,0x301,w' ]
  401: 
  402:     #----------------------------------------------------------------------------
  403:     # Set the kernel command line for the new domain.
  404: 
  405:     # Set root device. This one does matter for NetBSD
  406:     root = "xbd0"
  407:     # extra parameters passed to the kernel
  408:     # this is where you can set boot flags like -s, -a, etc ...
  409:     #extra = ""
  410: 
  411:     #----------------------------------------------------------------------------
  412:     # Set according to whether you want the domain restarted when it exits.
  413:     # The default is False.
  414:     #autorestart = True
  415: 
  416:     # end of nbsd config file ====================================================
  417: 
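The MAC relationship described in the comments above (the dom0-side
interface uses the guest MAC XOR'd with 00:00:00:01:00:00) can be
illustrated with a few lines of Python; this is purely an
illustration, not part of the Xen tools:

```python
# Compute the dom0-side xvif MAC from a guest MAC by XOR'ing each
# octet with the mask 00:00:00:01:00:00, as described above.
def dom0_side_mac(guest_mac):
    mask = [0x00, 0x00, 0x00, 0x01, 0x00, 0x00]
    octets = [int(o, 16) for o in guest_mac.split(":")]
    return ":".join("%02x" % (o ^ m) for o, m in zip(octets, mask))

print(dom0_side_mac("aa:00:00:50:02:f0"))  # -> aa:00:00:51:02:f0
```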
  418: When a new domain is created, Xen calls the
  419: `/usr/pkg/etc/xen/vif-bridge` script for each virtual network interface
  420: created in *domain0*. This can be used to automatically configure the
  421: xvif?.? interfaces in *domain0*. In our example, these will be bridged
  422: with the bridge0 device in *domain0*, but the bridge has to exist first.
  423: To do this, create the file `/etc/ifconfig.bridge0` and make it look
  424: like this:
  425: 
  426:     create
  427:     !brconfig $int add ex0 up
  428: 
  429: (replace `ex0` with the name of your physical interface). Then bridge0
  430: will be created on boot. See the bridge(4) man page for details.
  431: 
  432: So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for xvif?.? (a
  433: working vif-bridge is also provided with the xentools packages) configuring:
  434: 
  435:     #!/bin/sh
  436:     #============================================================================
  437:     # $NetBSD: howto.mdwn,v 1.21 2014/12/24 01:12:49 gdt Exp $
  438:     #
  439:     # /usr/pkg/etc/xen/vif-bridge
  440:     #
  441:     # Script for configuring a vif in bridged mode with a dom0 interface.
  442:     # The xend(8) daemon calls a vif script when bringing a vif up or down.
  443:     # The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp
  444:     # in the ``vif-script'' field.
  445:     #
  446:     # Usage: vif-bridge up|down [var=value ...]
  447:     #
  448:     # Actions:
  449:     #    up     Adds the vif interface to the bridge.
  450:     #    down   Removes the vif interface from the bridge.
  451:     #
  452:     # Variables:
  453:     #    domain name of the domain the interface is on (required).
  454: #    vif    vif interface name (required).
  455:     #    mac    vif MAC address (required).
  456:     #    bridge bridge to add the vif to (required).
  457:     #
  458:     # Example invocation:
  459:     #
  460:     # vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0
  461:     #
  462:     #============================================================================
  463: 
  464:     # Exit if anything goes wrong
  465:     set -e
  466: 
  467:     echo "vif-bridge $*"
  468: 
  469:     # Operation name.
  470:     OP=$1; shift
  471: 
  472:     # Pull variables in args into environment
  473:     for arg ; do export "${arg}" ; done
  474: 
  475:     # Required parameters. Fail if not set.
  476:     domain=${domain:?}
  477:     vif=${vif:?}
  478:     mac=${mac:?}
  479:     bridge=${bridge:?}
  480: 
  481:     # Optional parameters. Set defaults.
  482:     ip=${ip:-''}   # default to null (do nothing)
  483: 
  484:     # Are we going up or down?
  485:     case $OP in
  486:     up) brcmd='add' ;;
  487:     down)   brcmd='delete' ;;
  488:     *)
  489:         echo 'Invalid command: ' $OP
  490:         echo 'Valid commands are: up, down'
  491:         exit 1
  492:         ;;
  493:     esac
  494: 
  495:     # Don't do anything if the bridge is "null".
  496:     if [ "${bridge}" = "null" ] ; then
  497:         exit
  498:     fi
  499: 
  500:     # Don't do anything if the bridge doesn't exist.
  501:     if ! ifconfig -l | grep "${bridge}" >/dev/null; then
  502:         exit
  503:     fi
  504: 
  505:     # Add/remove vif to/from bridge.
  506:     ifconfig x${vif} $OP
  507:     brconfig ${bridge} ${brcmd} x${vif}
  508: 
  509: Now, running
  510: 
  511:     xm create -c /usr/pkg/etc/xen/nbsd
  512: 
  513: should create a domain and load a NetBSD kernel in it. (Note: `-c`
  514: causes xm to connect to the domain's console once created.) The kernel
  515: will try to find its root file system on xbd0 (i.e., wd0e) which hasn't
  516: been created yet. wd0e will be seen as a disk device in the new domain,
  517: so it will be 'sub-partitioned'. We could attach a ccd to wd0e in
  518: *domain0* and partition it, newfs and extract the NetBSD/i386 or amd64
  519: tarballs there, but there's an easier way: load the
  520: `netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets.
  521: Like other install kernels, it contains a ramdisk with sysinst, so you
  522: can install NetBSD using sysinst on your new domain.
  523: 
  524: If you want to install NetBSD/Xen with a CDROM image, the following line
  525: should be used in the `/usr/pkg/etc/xen/nbsd` file:
  526: 
  527:     disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]
  528: 
  529: After booting the domain, the option to install via CDROM may be
  530: selected. The CDROM device should be changed to `xbd1d`.
  531: 
  532: Once done installing, `halt -p` the new domain (don't reboot or halt, it
  533: would reload the INSTALL\_XEN3\_DOMU kernel even if you changed the
  534: config file), switch the config file back to the XEN3\_DOMU kernel, and
  535: start the new domain again. Now it should be able to use `root on xbd0a`
  536: and you should have a second, functional NetBSD system on your Xen
  537: installation.
  538: 
  539: When the new domain is booting you'll see some warnings about *wscons*
  540: and the pseudo-terminals. These can be fixed by editing the files
  541: `/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
  542: `/etc/ttys`, except *console*, like this:
  543: 
  544:     console "/usr/libexec/getty Pc"         vt100   on secure
  545:     ttyE0   "/usr/libexec/getty Pc"         vt220   off secure
  546:     ttyE1   "/usr/libexec/getty Pc"         vt220   off secure
  547:     ttyE2   "/usr/libexec/getty Pc"         vt220   off secure
  548:     ttyE3   "/usr/libexec/getty Pc"         vt220   off secure
  549: 
  550: Finally, all screens must be commented out from `/etc/wscons.conf`.
  551: 
  552: It is also desirable to add
  553: 
  554:     powerd=YES
  555: 
  556: in rc.conf. This way, the domain will be properly shut down if
  557: `xm shutdown -R` or `xm shutdown -H` is used on the domain0.
  558: 
  559: Your domain should now be ready to work; enjoy.
  560: 
  561: Creating an unprivileged Linux domain (domU)
  562: --------------------------------------------
  563: 
  564: Creating unprivileged Linux domains isn't much different from
  565: unprivileged NetBSD domains, but there are some details to know.
  566: 
  567: First, the second parameter passed to the disk declaration (the '0x1' in
  568: the example below)
  569: 
  570:     disk = [ 'phy:/dev/wd0e,0x1,w' ]
  571: 
  572: does matter to Linux. It wants a Linux device number here (e.g. 0x300
  573: for hda). Linux builds device numbers as (major \<\< 8) + minor. So,
  574: hda1 which has major 3 and minor 1 on a Linux system will have device
  575: number 0x301. Alternatively, device names can be used (hda, hdb, ...),
  576: as xentools has a table to map these names to device numbers. To export
  577: a partition to a Linux guest we can use:
  578: 
  579:     disk = [ 'phy:/dev/wd0e,0x300,w' ]
  580:     root = "/dev/hda1 ro"
  581: 
  582: and it will appear as /dev/hda on the Linux system, and be used as root
  583: partition.
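
The device-number arithmetic above can be checked with a short Python
sketch (illustrative only):

```python
# Linux encodes these (old-style IDE) device numbers as (major << 8) + minor.
def linux_devnum(major, minor):
    return (major << 8) + minor

print(hex(linux_devnum(3, 1)))  # hda1 (major 3, minor 1) -> 0x301
print(hex(linux_devnum(3, 0)))  # hda, the whole disk     -> 0x300
```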
  584: 
  585: To install the Linux system on the partition to be exported to the guest
  586: domain, the following method can be used: install sysutils/e2fsprogs
  587: from pkgsrc. Use mke2fs to format the partition that will be the root
  588: partition of your Linux domain, and mount it. Then copy the files from a
  589: working Linux system, make adjustments in `/etc` (fstab, network
  590: config). It should also be possible to extract binary packages such as
  591: .rpm or .deb directly to the mounted partition using the appropriate
  592: tool, possibly running under NetBSD's Linux emulation. Once the
  593: filesystem has been populated, umount it. If desirable, the filesystem
  594: can be converted to ext3 using tune2fs -j. It should now be possible to
  595: boot the Linux guest domain, using one of the vmlinuz-\*-xenU kernels
  596: available in the Xen binary distribution.
  597: 
  598: To get the Linux console right, you need to add:
  599: 
  600:     extra = "xencons=tty1"
  601: 
  602: to your configuration, since not all Linux distributions auto-attach a
  603: tty to the Xen console.
  604: 
  605: Creating an unprivileged Solaris domain (domU)
  606: ----------------------------------------------
  607: 
  608: Download an OpenSolaris [release](http://opensolaris.org/os/downloads/)
  609: or [development snapshot](http://genunix.org/) DVD image. Attach the DVD
  610: image to a vnd(4) device. Copy the kernel and ramdisk filesystem
  611: image to your dom0 filesystem.
  612: 
  613:     dom0# mkdir /root/solaris
  614:     dom0# vnconfig vnd0 osol-1002-124-x86.iso
  615:     dom0# mount /dev/vnd0a /mnt
  616: 
  617:     ## for a 64-bit guest
  618:     dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris
  619:     dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris
  620: 
  621:     ## for a 32-bit guest
  622:     dom0# cp /mnt/boot/x86.microroot /root/solaris
  623:     dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris
  624: 
  625:     dom0# umount /mnt
  626:           
  627: 
  628: Keep the vnd(4) device configured. For some reason the boot process stalls
  629: unless the DVD image is attached to the guest as a "phy" device. Create
  630: an initial configuration file with the following contents. Substitute
  631: */dev/wd0k* with an empty partition at least 8 GB in size.
  632: 
  633:     memory = 640
  634:     name = 'solaris'
  635:     disk = [ 'phy:/dev/wd0k,0,w' ]
  636:     disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]
  637:     vif = [ 'bridge=bridge0' ]
  638:     kernel = '/root/solaris/unix'
  639:     ramdisk = '/root/solaris/x86.microroot'
  640:     # for a 64-bit guest
  641:     extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'
  642:     # for a 32-bit guest
  643:     #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'
  644:           
  645: 
  646: Start the guest.
  647: 
  648:     dom0# xm create -c solaris.cfg
  649:     Started domain solaris
  650:                           v3.3.2 chgset 'unavailable'
  651:     SunOS Release 5.11 Version snv_124 64-bit
  652:     Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
  653:     Use is subject to license terms.
  654:     Hostname: opensolaris
  655:     Remounting root read/write
  656:     Probing for device nodes ...
  657:     WARNING: emlxs: ddi_modopen drv/fct failed: err 2
  658:     Preparing live image for use
  659:     Done mounting Live image
  660:           
  661: 
  662: Make sure the network is configured. Note that it can take a minute for
  663: the xnf0 interface to appear.
  664: 
  665:     opensolaris console login: jack
  666:     Password: jack
  667:     Sun Microsystems Inc.   SunOS 5.11      snv_124 November 2008
  668:     jack@opensolaris:~$ pfexec sh
  669:     sh-3.2# ifconfig -a
  670:     sh-3.2# exit
  671:           
  672: 
  673: Set a password for VNC and start the VNC server which provides the X11
  674: display where the installation program runs.
  675: 
  676:     jack@opensolaris:~$ vncpasswd
  677:     Password: solaris
  678:     Verify: solaris
  679:     jack@opensolaris:~$ cp .Xclients .vnc/xstartup
  680:     jack@opensolaris:~$ vncserver :1
  681:           
  682: 
  683: From a remote machine connect to the VNC server. Use `ifconfig xnf0` on
  684: the guest to find the correct IP address to use.
  685: 
  686:     remote$ vncviewer 172.18.2.99:1
  687:           
  688: 
It is also possible to launch the installation on a remote X11 display.

    jack@opensolaris:~$ export DISPLAY=172.18.1.1:0
    jack@opensolaris:~$ pfexec gui-install

After the GUI installation is complete, you will be asked to reboot.
Before that, you need to determine the ZFS ID of the new boot filesystem
and update the configuration file accordingly. Return to the guest
console.

    jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs
                    bootfs = 43
    ^C
    jack@opensolaris:~$

The final configuration file should look like this. Note in particular
the last line.

    memory = 640
    name = 'solaris'
    disk = [ 'phy:/dev/wd0k,0,w' ]
    vif = [ 'bridge=bridge0' ]
    kernel = '/root/solaris/unix'
    ramdisk = '/root/solaris/x86.microroot'
    extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'

Restart the guest to verify that it works correctly.

    dom0# xm destroy solaris
    dom0# xm create -c solaris.cfg
    Using config file "./solaris.cfg".
    v3.3.2 chgset 'unavailable'
    Started domain solaris
    SunOS Release 5.11 Version snv_124 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
    Hostname: osol
    Configuring devices.
    Loading smf(5) service descriptions: 160/160
    svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
    Reading ZFS config: done.
    Mounting ZFS filesystems: (6/6)
    Creating new rsa public/private host key pair
    Creating new dsa public/private host key pair

    osol console login:

Using PCI devices in guest domains
----------------------------------

The domain0 can give other domains access to selected PCI devices. This
can allow, for example, a non-privileged domain to have access to a
physical network interface or disk controller. However, keep in mind
that giving a domain access to a PCI device will most likely give the
domain read/write access to the whole physical memory, as PCs don't have
an IOMMU to restrict memory access to DMA-capable devices. Also, it's not
possible to export ISA devices to non-domain0 domains, which means that
the primary VGA adapter can't be exported. (A guest domain trying to
access the VGA registers will panic.)

This functionality is only available in NetBSD-5.1 (and later) domain0
and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as
support has not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to the
`pciback` driver in domain0. Devices passed to the domain0 via the
pciback.hide boot parameter will attach to `pciback` instead of the
usual driver. The list of devices is specified as `(bus:dev.func)`,
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the domain0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.

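
One quick way to confirm that `pciback` claimed the devices is to check
the boot messages and the `/kern/xen/pci` directory; a sketch (the exact
output depends on the hardware):

    dom0# dmesg | grep pciback
    dom0# ls /kern/xen/pci
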
PCI devices to be exported to a domU are listed in the `pci` array of
the domU's config file, in the format `'0000:bus:dev.func'`:

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an `xpci` device will show up, to which one or more PCI
busses will attach. The PCI drivers will then attach to the PCI busses as
usual. Note that the default NetBSD domU kernels do not have `xpci` or
any PCI drivers built in; you have to build your own kernel
to use PCI devices in a domU. Here's a kernel config example:

    include         "arch/i386/conf/XEN3_DOMU"
    #include         "arch/i386/conf/XENU"           # in NetBSD 3.0

    # Add support for PCI busses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # Now add PCI and related devices to be used by this domain
    # USB Controller and Devices

    # PCI USB controllers
    uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)

    # USB bus support
    usb*    at uhci?

    # USB Hubs
    uhub*   at usb?
    uhub*   at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass*  at uhub? port ? configuration ? interface ?
    wd*     at umass?
    # SCSI controllers
    ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd*     at scsibus? target ? lun ?      # SCSI disk drives
    cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives

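Such a kernel is built with the usual NetBSD cross-build tools; a sketch
using build.sh, assuming the config above was saved as
`sys/arch/i386/conf/XEN3_DOMU_PCI` (the config name is an example):

    $ cd /usr/src
    $ ./build.sh -m i386 tools kernel=XEN3_DOMU_PCI

The resulting kernel appears under the build's `obj` directory and can
then be copied to the domU's filesystem or pointed to from the domU
config file's `kernel` entry.
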
Links and further information
=============================

-   The [HowTo on Installing into RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
    explains how to set up booting a dom0 with Xen using grub
    with NetBSD's RAIDframe.  (This is obsolete with the use of
    NetBSD's native boot.)
-   An example of how to use NetBSD's native bootloader to load
    NetBSD/Xen instead of Grub can be found in the i386/amd64 boot(8)
    and boot.cfg(5) manpages.
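
For reference, a minimal `/boot.cfg` entry for loading a Xen dom0 with
the native bootloader might look like the following sketch (the kernel
path, xen.gz path, and memory size are examples; see boot.cfg(5) for
details):

    menu=Xen:load /netbsd-XEN3_DOM0 console=pc;multiboot /usr/pkg/xen3-kernel/xen.gz dom0_mem=512M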
