File: [NetBSD Developer Wiki] / wikisrc / ports / xen / howto.mdwn
Revision 1.11: Tue Dec 23 23:25:57 2014 UTC, by gdt

Introduction
------------

[![Xen screenshot](http://www.netbsd.org/gallery/in-Action/hubertf-xens.png)](../../gallery/in-Action/hubertf-xen.png)
    6: 
Xen is a virtual machine monitor for x86 hardware (it requires
i686-class CPUs), which supports running multiple guest operating
systems on a single machine. Guest OSes (also called "domains")
require a modified kernel which uses Xen hypercalls in place of direct
access to the physical hardware. At boot, the Xen kernel (also known
as the Xen hypervisor) is loaded (via the bootloader) along with the
guest kernel for the first domain (called *domain0*). The Xen kernel
has to be loaded using the multiboot protocol. You would use the
NetBSD boot loader for this, or alternatively the `grub` boot loader
(`grub` has some limitations, detailed below). *domain0* has special
privileges: it can access the physical hardware (PCI and ISA devices),
administrate other domains, and provide virtual devices (disks and
network) to the other domains, which lack those privileges. For more
details, see [](http://www.xen.org/).

NetBSD can be used both for *domain0 (Dom0)* and for further,
unprivileged (DomU) domains. (There can actually be multiple
privileged domains, each accessing different parts of the hardware and
all providing virtual devices to unprivileged domains; we will only
discuss the case of a single privileged domain, *domain0*.) *domain0*
sees physical devices much like a regular i386 or amd64 kernel does,
and owns the physical console (VGA or serial). Unprivileged domains
only see a character-only virtual console, virtual disks (`xbd`) and
virtual network interfaces (`xennet`), all provided by a privileged
domain (usually *domain0*). xbd devices are backed by a block device
(i.e., a partition of a disk, raid, ccd, ... device) in the privileged
domain. xennet devices are connected to virtual devices in the
privileged domain, named xvif\<domain number\>.\<interface number\>,
e.g., xvif1.0. Both xennet and xvif devices behave like regular
Ethernet devices (think of each pair as a crossover cable between two
PCs): they can be assigned addresses (and be routed or NATed, filtered
using IPF, and so on) or be added to a bridge.
   38: 
Installing NetBSD as privileged domain (Dom0)
---------------------------------------------

First do a NetBSD/i386 or NetBSD/amd64
[installation](../../docs/guide/en/chap-inst.html) of the 5.1 release
(or newer) as you usually do on x86 hardware. The binary releases are
available from [](ftp://ftp.NetBSD.org/pub/NetBSD/). Binary snapshots
for current and the stable branches are available from the daily
autobuilds. If you plan to use the `grub` boot loader, when
partitioning the disk you have to make the root partition smaller than
512MB, formatted as FFSv1 with 8k blocks/1k fragments. If the
partition is larger than this, uses FFSv2, or has different
block/fragment sizes, grub may fail to load some files. Also keep in
mind that you'll probably want to provide virtual disks to other
domains, so reserve some partitions for those virtual disks.
Alternatively, you can create large files in the file system, map them
to vnd(4) devices, and export these vnd devices to other domains.
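As a sketch of the file-backed approach (the path and size here are
arbitrary placeholders), a virtual disk can be prepared like this in
the dom0:

```shell
# Create a 4GB image file to back the guest's disk
dd if=/dev/zero of=/var/xen/nbsd-disk bs=1024k count=4096
# Attach it to a vnd(4) device; /dev/vnd0d can then be exported to a
# guest with a disk entry such as 'phy:/dev/vnd0d,0x1,w'
vnconfig vnd0 /var/xen/nbsd-disk
```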
   56: 
The next step is to install the Xen packages via pkgsrc or from binary
packages. See [the pkgsrc
documentation](http://www.NetBSD.org/docs/pkgsrc/) if you are
unfamiliar with pkgsrc and/or the handling of binary packages. Xen
3.1, 3.3, 4.1 and 4.2 are available. 3.1 supports PCI pass-through
while the other versions do not. You'll need either
`sysutils/xentools3` and `sysutils/xenkernel3` for Xen 3.1,
`sysutils/xentools33` and `sysutils/xenkernel33` for Xen 3.3,
`sysutils/xentools41` and `sysutils/xenkernel41` for Xen 4.1, or
`sysutils/xentools42` and `sysutils/xenkernel42` for Xen 4.2. You'll
also need `sysutils/grub` if you plan to use the grub boot loader. If
using Xen 3.1, you may also want to install `sysutils/xentools3-hvm`,
which contains the utilities to run unmodified guest OSes using the
*HVM* support (for later versions this is included in
`sysutils/xentools`). Note that your CPU needs to support this: Intel
CPUs must have the 'VT' feature, AMD CPUs the 'SVM' feature. You can
easily find out whether your CPU supports HVM by using NetBSD's cpuctl
command:
   74: 
    # cpuctl identify 0
    cpu0: Intel Core 2 (Merom) (686-class), id 0x6f6
    cpu0: features 0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR>
    cpu0: features 0xbfebfbff<PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX>
    cpu0: features 0xbfebfbff<FXSR,SSE,SSE2,SS,HTT,TM,SBF>
    cpu0: features2 0x4e33d<SSE3,DTES64,MONITOR,DS-CPL,,TM2,SSSE3,CX16,xTPR,PDCM,DCA>
    cpu0: features3 0x20100800<SYSCALL/SYSRET,XD,EM64T>
    cpu0: "Intel(R) Xeon(R) CPU            5130  @ 2.00GHz"
    cpu0: I-cache 32KB 64B/line 8-way, D-cache 32KB 64B/line 8-way
    cpu0: L2 cache 4MB 64B/line 16-way
    cpu0: ITLB 128 4KB entries 4-way
    cpu0: DTLB 256 4KB entries 4-way, 32 4MB entries 4-way
    cpu0: Initial APIC ID 0
    cpu0: Cluster/Package ID 0
    cpu0: Core ID 0
    cpu0: family 06 model 0f extfamily 00 extmodel 00
   91: 
Depending on your CPU, the feature you are looking for is called HVM,
SVM or VMX.
   94: 
Next you need to copy the selected Xen kernel itself. pkgsrc installs
them under `/usr/pkg/xen*-kernel/`. The file you're looking for is
`xen.gz`. Copy it to your root file system. `xen-debug.gz` is a kernel
with more consistency checks and more details printed on the serial
console. It is useful for debugging crashing guests if you use a
serial console; it is not useful with a VGA console.
  101: 
You'll then need a NetBSD/Xen kernel for *domain0* on your root file
system. The XEN3PAE\_DOM0 or XEN3\_DOM0 kernel provided as part of the
i386 or amd64 binaries is suitable for this, but you may want to
customize it. Keep your native kernel around, as it can be useful for
recovery. *Note:* the *domain0* kernel must support KERNFS and `/kern`
must be mounted, because *xend* needs access to `/kern/xen/privcmd`.
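If `/kern` is not already mounted at boot, a line along these lines in
`/etc/fstab` takes care of it (a sketch; merge with your existing
fstab):

```
kernfs /kern kernfs rw
```

Running `mount /kern` afterwards activates it without a reboot.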
  108: 
Next you need a bootloader to load the `xen.gz` kernel, and the NetBSD
*domain0* kernel as a module. This can be `grub` or NetBSD's boot
loader. Below is a detailed example for grub; see the boot.cfg(5)
manual page for an example using the latter.

This is also where you'll specify the memory allocated to *domain0*,
the console to use, and so on.
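For NetBSD's native boot loader, a `/boot.cfg` entry along the
following lines should work (a sketch based on boot.cfg(5); the kernel
name and memory size are placeholders to adjust):

```
menu=Boot Xen:load /netbsd-XEN3_DOM0 console=pc;multiboot /xen.gz dom0_mem=512M
menu=Boot normal:boot /netbsd
```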
  116: 
Here is a commented `/grub/menu.lst` file:

    #Grub config file for NetBSD/xen. Copy as /grub/menu.lst and run
    # grub-install /dev/rwd0d (assuming your boot device is wd0).
    #
    # The default entry to load will be the first one
    default=0

    # boot the default entry after 10s if the user didn't hit the keyboard
    timeout=10

    # Configure serial port to use as console. Ignore if you'll use VGA only
    serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1

    # Let the user select which console to use (serial or VGA), default
    # to serial after 10s
    terminal --timeout=10 serial console

    # An entry for NetBSD/xen, using /netbsd as the domain0 kernel, and serial
    # console. Domain0 will have 64MB RAM allocated.
    # Assume NetBSD is installed in the first MBR partition.
    title Xen 3 / NetBSD (hda0, serial)
      root(hd0,0)
      kernel (hd0,a)/xen.gz dom0_mem=65536 com1=115200,8n1
      module (hd0,a)/netbsd bootdev=wd0a ro console=ttyS0

    # Same as above, but using VGA console
    # We can use console=tty0 (Linux syntax) or console=pc (NetBSD syntax)
    title Xen 3 / NetBSD (hda0, vga)
      root(hd0,0)
      kernel (hd0,a)/xen.gz dom0_mem=65536
      module (hd0,a)/netbsd bootdev=wd0a ro console=tty0

    # NetBSD/xen using a backup domain0 kernel (in case you installed a
    # nonworking kernel as /netbsd)
    title Xen 3 / NetBSD (hda0, backup, serial)
      root(hd0,0)
      kernel (hd0,a)/xen.gz dom0_mem=65536 com1=115200,8n1
      module (hd0,a)/netbsd.backup bootdev=wd0a ro console=ttyS0
    title Xen 3 / NetBSD (hda0, backup, VGA)
      root(hd0,0)
      kernel (hd0,a)/xen.gz dom0_mem=65536
      module (hd0,a)/netbsd.backup bootdev=wd0a ro console=tty0

    #Load a regular NetBSD/i386 kernel. Can be useful if you end up with a
    #nonworking /xen.gz
    title NetBSD 5.1
      root (hd0,a)
      kernel --type=netbsd /netbsd-GENERIC

    #Load the NetBSD bootloader, letting it load the NetBSD/i386 kernel.
    #May be better than the above, as grub can't pass all required
    #information to the NetBSD/i386 kernel (e.g. console, root device, ...)
    title NetBSD chain
      root        (hd0,0)
      chainloader +1

    ## end of grub config file.
  176: 
Install grub with the following command:

    # grub --no-floppy

    grub> root (hd0,a)
     Filesystem type is ffs, partition type 0xa9

    grub> setup (hd0)
     Checking if "/boot/grub/stage1" exists... no
     Checking if "/grub/stage1" exists... yes
     Checking if "/grub/stage2" exists... yes
     Checking if "/grub/ffs_stage1_5" exists... yes
     Running "embed /grub/ffs_stage1_5 (hd0)"...  14 sectors are embedded.
    succeeded
     Running "install /grub/stage1 (hd0) (hd0)1+14 p (hd0,0,a)/grub/stage2 /grub/menu.lst"...
     succeeded
    Done.
  195: 
Creating an unprivileged NetBSD domain (DomU)
---------------------------------------------

Once you have *domain0* running, you need to start the xen tool daemon
(`/usr/pkg/share/examples/rc.d/xend start`) and the xen backend daemon
(`/usr/pkg/share/examples/rc.d/xenbackendd start` for Xen 3.\*,
`/usr/pkg/share/examples/rc.d/xencommons start` for Xen 4.\*). Make
sure that `/dev/xencons` and `/dev/xenevt` exist before starting
`xend`. You can create them with this command:

    # cd /dev && sh MAKEDEV xen

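To have these daemons started automatically at boot, one approach
(assuming you copy the example rc.d scripts into `/etc/rc.d`) is to
enable them in `/etc/rc.conf`:

```
# Xen 3.x
xend=YES
xenbackendd=YES
# Xen 4.x uses xencommons instead of xenbackendd:
#xencommons=YES
```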
xend will write logs to `/var/log/xend.log` and
`/var/log/xend-debug.log`. You can then control xen with the xm tool.
`xm list` will show something like:

    # xm list
    Name              Id  Mem(MB)  CPU  State  Time(s)  Console
    Domain-0           0       64    0  r----     58.1

`xm create` allows you to create a new domain. It uses a config file
in PKG\_SYSCONFDIR for its parameters. By default, this file will be
in `/usr/pkg/etc/xen/`. On creation, a kernel has to be specified,
which will be executed in the new domain (this kernel is in the
*domain0* file system, not on the new domain's virtual disk; but
please note, you should install the same kernel into *domainU* as
`/netbsd` in order to make your system tools, like savecore(8), work).
A suitable kernel is provided as part of the i386 and amd64 binary
sets: XEN3\_DOMU.
  224: 
Here is an example `/usr/pkg/etc/xen/nbsd` config file:

    #  -*- mode: python; -*-
    #============================================================================
    # Python defaults setup for 'xm create'.
    # Edit this file to reflect the configuration of your system.
    #============================================================================

    #----------------------------------------------------------------------------
    # Kernel image file. This kernel will be loaded in the new domain.
    kernel = "/home/bouyer/netbsd-XEN3_DOMU"
    #kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"

    # Memory allocation (in megabytes) for the new domain.
    memory = 128

    # A handy name for your new domain. This will appear in 'xm list',
    # and you can use this as parameters for xm in place of the domain
    # number. All domains must have different names.
    #
    name = "nbsd"

    # The number of virtual CPUs this domain has.
    #
    vcpus = 1

    #----------------------------------------------------------------------------
    # Define network interfaces for the new domain.

    # Number of network interfaces (must be at least 1). Default is 1.
    nics = 1

    # Define MAC and/or bridge for the network interfaces.
    #
    # The MAC address specified in ``mac'' is the one used for the interface
    # in the new domain. The interface in domain0 will use this address XOR'd
    # with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random
    # MACs are assigned if not given.
    #
    # ``bridge'' is a required parameter, which will be passed to the
    # vif-script called by xend(8) when a new domain is created to configure
    # the new xvif interface in domain0.
    #
    # In this example, the xvif is added to bridge0, which should have been
    # set up prior to the new domain being created -- either in the
    # ``network'' script or using a /etc/ifconfig.bridge0 file.
    #
    vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]

    #----------------------------------------------------------------------------
    # Define the disk devices you want the domain to have access to, and
    # what you want them accessible as.
    #
    # Each disk entry is of the form:
    #
    #   phy:DEV,VDEV,MODE
    #
    # where DEV is the device, VDEV is the device name the domain will see,
    # and MODE is r for read-only, w for read-write.  You can also create
    # file-backed domains using disk entries of the form:
    #
    #   file:PATH,VDEV,MODE
    #
    # where PATH is the path to the file used as the virtual disk, and VDEV
    # and MODE have the same meaning as for ``phy'' devices.
    #
    # VDEV doesn't really matter for a NetBSD guest OS (it's just used as an index),
    # but it does for Linux.
    # Worse, the device has to exist in /dev/ of domain0, because xm will
    # try to stat() it. This means that in order to load a Linux guest OS
    # from a NetBSD domain0, you'll have to create /dev/hda1, /dev/hda2, ...
    # on domain0, with the major/minor from Linux :(
    # Alternatively it's possible to specify the device number in hex,
    # e.g. 0x301 for /dev/hda1, 0x302 for /dev/hda2, etc ...

    disk = [ 'phy:/dev/wd0e,0x1,w' ]
    #disk = [ 'file:/var/xen/nbsd-disk,0x01,w' ]
    #disk = [ 'file:/var/xen/nbsd-disk,0x301,w' ]

    #----------------------------------------------------------------------------
    # Set the kernel command line for the new domain.

    # Set root device. This one does matter for NetBSD
    root = "xbd0"
    # extra parameters passed to the kernel
    # this is where you can set boot flags like -s, -a, etc ...
    #extra = ""

    #----------------------------------------------------------------------------
    # Set according to whether you want the domain restarted when it exits.
    # The default is False.
    #autorestart = True

    # end of nbsd config file ====================================================
  319: 
When a new domain is created, xen calls the
`/usr/pkg/etc/xen/vif-bridge` script for each virtual network
interface created in *domain0*. This can be used to automatically
configure the xvif?.? interfaces in *domain0*. In our example, these
will be bridged with the bridge0 device in *domain0*, but the bridge
has to exist first. To do this, create the file
`/etc/ifconfig.bridge0` and make it look like this:

    create
    !brconfig $int add ex0 up

(replace `ex0` with the name of your physical interface). Then bridge0
will be created on boot. See the bridge(4) man page for details.
  333: 
So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for xvif?.? (a
working vif-bridge is also provided with xentools20), configuring:

    #!/bin/sh
    #============================================================================
    # $NetBSD: howto.mdwn,v 1.11 2014/12/23 23:25:57 gdt Exp $
    #
    # /usr/pkg/etc/xen/vif-bridge
    #
    # Script for configuring a vif in bridged mode with a dom0 interface.
    # The xend(8) daemon calls a vif script when bringing a vif up or down.
    # The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp
    # in the ``vif-script'' field.
    #
    # Usage: vif-bridge up|down [var=value ...]
    #
    # Actions:
    #    up     Adds the vif interface to the bridge.
    #    down   Removes the vif interface from the bridge.
    #
    # Variables:
    #    domain name of the domain the interface is on (required).
    #    vif    vif interface name (required).
    #    mac    vif MAC address (required).
    #    bridge bridge to add the vif to (required).
    #
    # Example invocation:
    #
    # vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0
    #
    #============================================================================

    # Exit if anything goes wrong
    set -e

    echo "vif-bridge $*"

    # Operation name.
    OP=$1; shift

    # Pull variables in args into environment
    for arg ; do export "${arg}" ; done

    # Required parameters. Fail if not set.
    domain=${domain:?}
    vif=${vif:?}
    mac=${mac:?}
    bridge=${bridge:?}

    # Optional parameters. Set defaults.
    ip=${ip:-''}   # default to null (do nothing)

    # Are we going up or down?
    case $OP in
    up)     brcmd='add' ;;
    down)   brcmd='delete' ;;
    *)
        echo 'Invalid command: ' $OP
        echo 'Valid commands are: up, down'
        exit 1
        ;;
    esac

    # Don't do anything if the bridge is "null".
    if [ "${bridge}" = "null" ] ; then
        exit
    fi

    # Don't do anything if the bridge doesn't exist.
    if ! ifconfig -l | grep "${bridge}" >/dev/null; then
        exit
    fi

    # Add/remove vif to/from bridge.
    ifconfig x${vif} $OP
    brconfig ${bridge} ${brcmd} x${vif}
  410: 
Now, running

    xm create -c /usr/pkg/etc/xen/nbsd

should create a domain and load a NetBSD kernel in it. (Note: `-c`
causes xm to connect to the domain's console once created.) The kernel
will try to find its root file system on xbd0 (i.e., wd0e), which
hasn't been created yet. wd0e will be seen as a disk device in the new
domain, so it will be 'sub-partitioned'. We could attach a ccd to wd0e
in *domain0*, partition it, newfs it, and extract the NetBSD/i386 or
amd64 tarballs there, but there's an easier way: load the
`netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets.
Like other install kernels, it contains a ramdisk with sysinst, so you
can install NetBSD using sysinst on your new domain.
  425: 
If you want to install NetBSD/Xen with a CDROM image, the following
line should be used in the `/usr/pkg/etc/xen/nbsd` file:

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be
selected. The CDROM device should be changed to `xbd1d`.

Once done installing, `halt -p` the new domain (don't reboot or halt:
that would reload the INSTALL\_XEN3\_DOMU kernel even if you changed
the config file), switch the config file back to the XEN3\_DOMU
kernel, and start the new domain again. Now it should be able to use
`root on xbd0a` and you should have a second, functional NetBSD system
on your xen installation.
  440: 
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
`/etc/ttys`, except *console*, like this:

    console "/usr/libexec/getty Pc"         vt100   on secure
    ttyE0   "/usr/libexec/getty Pc"         vt220   off secure
    ttyE1   "/usr/libexec/getty Pc"         vt220   off secure
    ttyE2   "/usr/libexec/getty Pc"         vt220   off secure
    ttyE3   "/usr/libexec/getty Pc"         vt220   off secure

Finally, all screens must be commented out from `/etc/wscons.conf`.

It is also desirable to add

    powerd=YES

in rc.conf. This way, the domain will be properly shut down if
`xm shutdown -R` or `xm shutdown -H` is used on the domain0.

Your domain should now be ready to work; enjoy.
  462: 
Creating an unprivileged Linux domain (DomU)
--------------------------------------------

Creating unprivileged Linux domains isn't much different from creating
unprivileged NetBSD domains, but there are some details to know.

First, the second parameter passed to the disk declaration (the '0x1'
in the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda). Linux builds device numbers as (major \<\< 8) + minor. So,
hda1, which has major 3 and minor 1 on a Linux system, will have
device number 0x301. Alternatively, device names can be used (hda,
hdb, ...), as xentools has a table to map these names to device
numbers. To export a partition to a Linux guest we can use:

    disk = [ 'phy:/dev/wd0e,0x300,w' ]
    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as the
root partition.
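The arithmetic can be checked from any shell; for instance, for hda
(major 3):

```shell
# Linux encodes device numbers as (major << 8) + minor
printf '%#x\n' $(( (3 << 8) + 1 ))   # hda1 -> 0x301
printf '%#x\n' $(( (3 << 8) + 2 ))   # hda2 -> 0x302
```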
  486: 
To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, and make adjustments
in `/etc` (fstab, network config). It should also be possible to
extract binary packages such as .rpm or .deb directly to the mounted
partition using the appropriate tool, possibly running under NetBSD's
Linux emulation. Once the file system has been populated, umount it.
If desirable, the file system can be converted to ext3 using tune2fs
-j. It should now be possible to boot the Linux guest domain, using
one of the vmlinuz-\*-xenU kernels available in the Xen binary
distribution.
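The steps above can be sketched as follows (wd0f and the tarball path
are placeholders; substitute the partition you actually export):

```shell
# Format the exported partition as ext2 and mount it in the dom0
mke2fs /dev/rwd0f
mount -t ext2fs /dev/wd0f /mnt
# Populate it from a tarball of a working Linux system, then fix
# /mnt/etc/fstab and the network configuration by hand
tar -xpzf /path/to/linux-root.tgz -C /mnt
umount /mnt
# Optionally add an ext3 journal
tune2fs -j /dev/rwd0f
```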
  499: 
To get the Linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration, since not all Linux distributions auto-attach a
tty to the xen console.
  506: 
Creating an unprivileged Solaris domain (DomU)
----------------------------------------------

Download an OpenSolaris [release](http://opensolaris.org/os/downloads/)
or [development snapshot](http://genunix.org/) DVD image. Attach the
DVD image to a vnd(4) device. Copy the kernel and ramdisk file system
image to your dom0 file system.

    dom0# mkdir /root/solaris
    dom0# vnconfig vnd0 osol-1002-124-x86.iso
    dom0# mount /dev/vnd0a /mnt

    ## for a 64-bit guest
    dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris
    dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris

    ## for a 32-bit guest
    dom0# cp /mnt/boot/x86.microroot /root/solaris
    dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris

    dom0# umount /mnt

Keep the vnd(4) device configured. For some reason the boot process
stalls unless the DVD image is attached to the guest as a "phy"
device. Create an initial configuration file with the following
contents. Substitute */dev/wd0k* with an empty partition at least 8 GB
large.

    memory = 640
    name = 'solaris'
    disk = [ 'phy:/dev/wd0k,0,w' ]
    disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]
    vif = [ 'bridge=bridge0' ]
    kernel = '/root/solaris/unix'
    ramdisk = '/root/solaris/x86.microroot'
    # for a 64-bit guest
    extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'
    # for a 32-bit guest
    #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'

Start the guest.

    dom0# xm create -c solaris.cfg
    Started domain solaris
                          v3.3.2 chgset 'unavailable'
    SunOS Release 5.11 Version snv_124 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    Hostname: opensolaris
    Remounting root read/write
    Probing for device nodes ...
    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
    Preparing live image for use
    Done mounting Live image

Make sure the network is configured. Note that it can take a minute
for the xnf0 interface to appear.

    opensolaris console login: jack
    Password: jack
    Sun Microsystems Inc.   SunOS 5.11      snv_124 November 2008
    jack@opensolaris:~$ pfexec sh
    sh-3.2# ifconfig -a
    sh-3.2# exit

Set a password for VNC and start the VNC server which provides the X11
display where the installation program runs.

    jack@opensolaris:~$ vncpasswd
    Password: solaris
    Verify: solaris
    jack@opensolaris:~$ cp .Xclients .vnc/xstartup
    jack@opensolaris:~$ vncserver :1

From a remote machine connect to the VNC server. Use `ifconfig xnf0`
on the guest to find the correct IP address to use.

    remote$ vncviewer 172.18.2.99:1

It is also possible to launch the installation on a remote X11
display.

    jack@opensolaris:~$ export DISPLAY=172.18.1.1:0
    jack@opensolaris:~$ pfexec gui-install

After the GUI installation is complete you will be asked to reboot.
Before that you need to determine the ZFS ID for the new boot file
system and update the configuration file accordingly. Return to the
guest console.

    jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs
                    bootfs = 43
    ^C
    jack@opensolaris:~$

The final configuration file should look like this. Note in particular
the last line.

    memory = 640
    name = 'solaris'
    disk = [ 'phy:/dev/wd0k,0,w' ]
    vif = [ 'bridge=bridge0' ]
    kernel = '/root/solaris/unix'
    ramdisk = '/root/solaris/x86.microroot'
    extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'

Restart the guest to verify it works correctly.

    dom0# xm destroy solaris
    dom0# xm create -c solaris.cfg
    Using config file "./solaris.cfg".
    v3.3.2 chgset 'unavailable'
    Started domain solaris
    SunOS Release 5.11 Version snv_124 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    WARNING: emlxs: ddi_modopen drv/fct failed: err 2
    Hostname: osol
    Configuring devices.
    Loading smf(5) service descriptions: 160/160
    svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
    Reading ZFS config: done.
    Mounting ZFS filesystems: (6/6)
    Creating new rsa public/private host key pair
    Creating new dsa public/private host key pair

    osol console login:
  642: 
Using PCI devices in guest domains
==================================

The domain0 can give other domains access to selected PCI devices.
This can allow, for example, a non-privileged domain to have access to
a physical network interface or disk controller. However, keep in mind
that giving a domain access to a PCI device will most likely give the
domain read/write access to the whole physical memory, as PCs don't
have an IOMMU to restrict memory access to DMA-capable devices. Also,
it's not possible to export ISA devices to non-domain0 domains, which
means that the primary VGA adapter can't be exported (a guest domain
trying to access the VGA registers will panic).

This functionality is only available in NetBSD-5.1 (and later) domain0
and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as
support has not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to
the `pciback` driver in domain0. Devices passed to the domain0 via the
pciback.hide boot parameter will attach to `pciback` instead of the
usual driver. The list of devices is specified as `(bus:dev.func)`,
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the domain0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.
  671: 
PCI devices to be exported to a domU are listed in the `pci` array of
the domU's config file, with the format `'0000:bus:dev.func'`:

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an `xpci` device will show up, to which one or more PCI
busses will attach. The PCI drivers will then attach to the PCI busses
as usual. Note that the default NetBSD DOMU kernels do not have `xpci`
or any PCI drivers built in; you have to build your own kernel to use
PCI devices in a domU. Here's a kernel config example:

    include         "arch/i386/conf/XEN3_DOMU"
    #include         "arch/i386/conf/XENU"           # in NetBSD 3.0

    # Add support for PCI busses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # Now add PCI and related devices to be used by this domain
    # USB Controller and Devices

    # PCI USB controllers
    uhci*   at pci? dev ? function ?        # Universal Host Controller (Intel)

    # USB bus support
    usb*    at uhci?

    # USB Hubs
    uhub*   at usb?
    uhub*   at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass*  at uhub? port ? configuration ? interface ?
    wd*     at umass?
    # SCSI controllers
    ahc*    at pci? dev ? function ?        # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd*     at scsibus? target ? lun ?      # SCSI disk drives
    cd*     at scsibus? target ? lun ?      # SCSI CD-ROM drives
  715: 
Links and further information
=============================

-   The [HowTo on Installing into RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
    explains how to set up booting a dom0 with Xen using grub
    with NetBSD's RAIDframe.  (This is obsolete with the use of
    NetBSD's native boot.)
-   An example of how to use NetBSD's native bootloader to load
    NetBSD/Xen instead of Grub can be found in the i386/amd64 boot(8)
    and boot.cfg(5) manpages.
