[[!meta title="Xen HowTo"]]

Xen is a Type 1 hypervisor which supports running multiple guest operating
systems on a single physical machine. One uses the Xen kernel to control the
CPU, memory and console, a dom0 operating system which mediates access to
other hardware (e.g., disks, network, USB), and one or more domU operating
systems which operate in an unprivileged virtualized environment. IO requests
from the domU systems are forwarded by the Xen hypervisor to the dom0 to be
fulfilled.

Xen supports different styles of guest:

[[!table data="""
Style of guest	|Supported by NetBSD
PV		|Yes (dom0, domU)
HVM		|Yes (domU)
PVHVM		|No
PVH		|No
"""]]

In Para-Virtualized (PV) mode, the guest OS does not attempt to access
hardware directly, but instead makes hypercalls to the hypervisor; PV
guests must be specifically coded for Xen. In HVM mode, no guest
modification is required; however, hardware support is required, such
as VT-x on Intel CPUs and SVM on AMD CPUs.

At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
The dom0 can start one or more domUs. (Booting is explained in detail
in the dom0 section.)

This HOWTO presumes a basic familiarity with the Xen system
architecture, with installing NetBSD on i386/amd64 hardware, and with
installing software from pkgsrc. See also the [Xen
website](http://www.xenproject.org/).

[[!toc]]

#Versions and Support

In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
xenkernel and xentools. We will refer only to the kernel versions,
but note that both packages must be installed together and must have
matching versions.

Versions available in pkgsrc:

[[!table data="""
Xen Version	|Package Name	|Xen CPU Support	|xm?	|EOL'ed By Upstream
4.2		|xenkernel42	|i386 x86_64		|yes	|Yes
4.5		|xenkernel45	|x86_64			|	|Yes
4.6		|xenkernel46	|x86_64			|	|Yes
4.8		|xenkernel48	|x86_64			|	|Yes
4.11		|xenkernel411	|x86_64			|	|No
"""]]

See also the [Xen Security Advisory page](http://xenbits.xen.org/xsa/).

Multiprocessor (SMP) support in NetBSD differs depending on the domain:

[[!table data="""
Domain	|Supports SMP
dom0	|No
domU	|Yes
"""]]

Note: NetBSD support is called XEN3. However, it also supports Xen 4,
because the hypercall interface has remained identical.

Older Xen had a python-based management tool called xm, now replaced
by xl. xm is obsolete, but 4.2 remains in pkgsrc because migrating
from xm to xl is not always trivial, and because 4.2 is the last
version to run on an i386 dom0.

Architecture
------------

Xen 4.5 and later runs on x86_64 hardware (the NetBSD amd64 port).
Xen 4.2 can in theory use i386 hardware, but we do not have
recent reports of success.

The dom0 system, plus each domU, can be either i386PAE or amd64.
i386 without PAE is not supported.

The standard approach is to use NetBSD/amd64 for the dom0.

To use an i386PAE dom0 (other than on 4.2), one must build or obtain a
64-bit Xen kernel and install it on the system.

For domUs, i386PAE is considered
[faster](https://lists.xen.org/archives/html/xen-devel/2012-07/msg00085.html)
than amd64.

# Creating a dom0

In order to install NetBSD as a dom0, one must first install a normal
NetBSD system, and then pivot the install to a dom0 install by changing
the kernel and boot configuration.

In 2018-05, trouble booting a dom0 was reported with 256M of RAM: with
512M it worked reliably. This does not make sense, but if you see
"not ELF" after Xen boots, try increasing dom0 RAM.

Installation of NetBSD
----------------------

[Install NetBSD/amd64](/guide/inst/)
just as you would if you were not using Xen.

Installation of Xen
-------------------

We will assume that you chose Xen 4.8, with NetBSD/amd64 as
dom0. In the dom0, install xenkernel48 and xentools48 from pkgsrc.
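For example, if binary packages are available for your platform and
pkgin is installed, something like the following should work (a sketch
only; building sysutils/xenkernel48 and sysutils/xentools48 from pkgsrc
source is equally valid):

[[!template id=programlisting text="""
# pkgin install xenkernel48 xentools48
"""]]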
Once this is done, install the Xen kernel itself:

[[!template id=programlisting text="""
# cp /usr/pkg/xen48-kernel/xen.gz /
"""]]

Then, place a NetBSD XEN3_DOM0 kernel in the `/` directory. Such a
kernel can either be compiled manually, or downloaded from the NetBSD
FTP site, for example at:

[[!template id=programlisting text="""
ftp.netbsd.org/pub/NetBSD/NetBSD-8.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
"""]]

Add a line to /boot.cfg to boot Xen:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
"""]]

This specifies that the dom0 should have 512MB of RAM, leaving the rest
to be allocated for domUs. To use a serial console, use:

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz;multiboot /xen.gz dom0_mem=512M console=com1 com1=9600,8n1
"""]]

which will use the first serial port for Xen (which counts starting
from 1, unlike NetBSD which counts starting from 0), forcing
speed/parity. Because the NetBSD command line lacks a
"console=pc" argument, it will use the default "xencons" console device,
which directs the console I/O through Xen to the same console device Xen
itself uses (in this case, the serial port).

To improve performance, one can also add `dom0_max_vcpus=1 dom0_vcpus_pin`,
to force only one vcpu to be provided (since a NetBSD dom0 can't use
more) and to pin that vcpu to a physical CPU. Xen has
[many boot options](http://xenbits.xenproject.org/docs/4.8-testing/misc/xen-command-line.html),
and other than dom0 memory and max_vcpus, they are generally not
necessary.

Copy the boot scripts into `/etc/rc.d`:

[[!template id=programlisting text="""
# cp /usr/pkg/share/examples/rc.d/xen* /etc/rc.d/
"""]]

Enable `xencommons`:

[[!template id=filecontent name="/etc/rc.conf" text="""
xencommons=YES
"""]]

Now, reboot so that you are running the XEN3_DOM0 kernel under Xen,
rather than GENERIC without Xen.

TODO: Recommend for/against xen-watchdog.

Once the reboot is done, use `xl` to inspect Xen's boot messages,
available resources, and running domains. For example:

[[!template id=programlisting text="""
# xl dmesg
... xen's boot info ...
# xl info
... available memory, etc ...
# xl list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0       64    0  r----     58.1
"""]]

Xen logs will be in /var/log/xen.

### Issues with xencommons

`xencommons` starts `xenstored`, which stores data on behalf of dom0 and
domUs. It does not currently work to stop and start xenstored.
Certainly all domUs should be shut down first, following the sort order
of the rc.d scripts. However, the dom0 sets up state with xenstored and
is not notified when xenstored exits, so that state is not recreated
when a new xenstored starts. Until there is a mechanism to make this
work, one should not expect to be able to restart xenstored (and thus
xencommons).
There is currently no reason to expect that this will get fixed any
time soon.

anita (for testing NetBSD)
--------------------------

With the setup so far (assuming 4.8/xl), one should be able to run
anita (see pkgsrc/misc/py-anita) to test NetBSD releases, by doing (as
root, because anita must create a domU):

[[!template id=programlisting text="""
anita --vmm=xl test file:///usr/obj/i386/
"""]]

Xen-specific NetBSD issues
--------------------------

There are (at least) two additional things different about NetBSD as a
dom0 kernel compared to hardware.

One is that the module ABI is different because some of the #defines
change, so one must build modules for Xen. As of netbsd-7, the build
system does this automatically.

The other difference is that XEN3_DOM0 does not have exactly the same
options as GENERIC. While it is debatable whether or not this is a
bug, users should be aware of this and can simply add missing config
items if desired.

Updating NetBSD in a dom0
-------------------------

This is just like updating NetBSD on bare hardware, assuming the new
version supports the version of Xen you are running. Generally, one
replaces the kernel and reboots, and then overlays userland binaries
and adjusts `/etc`.

Note that one must update both the non-Xen kernel typically used for
rescue purposes and the DOM0 kernel used with Xen.
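For example, the kernel part of such an update might look like this (a
sketch only; /path/to/new is a placeholder for wherever the new kernels
were built or unpacked, and the file names match the /boot.cfg shown
earlier):

[[!template id=programlisting text="""
# cp /netbsd /netbsd.old
# cp /path/to/new/netbsd-GENERIC /netbsd
# cp /netbsd-XEN3_DOM0.gz /netbsd-XEN3_DOM0.old.gz
# cp /path/to/new/netbsd-XEN3_DOM0.gz /
# shutdown -r now
"""]]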
Converting from grub to /boot
-----------------------------

These instructions were used to convert a system from
grub to /boot. The system was originally installed in February of
2006 with a RAID1 setup and grub to boot Xen 2, and has been updated
over time. Before these commands, it was running NetBSD 6 i386, Xen
4.1 and grub, much like the message linked earlier in the grub
section.

[[!template id=programlisting text="""
# Install MBR bootblocks on both disks.
fdisk -i /dev/rwd0d
fdisk -i /dev/rwd1d
# Install NetBSD primary boot loader (/ is FFSv1) into RAID1 components.
installboot -v /dev/rwd0d /usr/mdec/bootxx_ffsv1
installboot -v /dev/rwd1d /usr/mdec/bootxx_ffsv1
# Install secondary boot loader
cp -p /usr/mdec/boot /
# Create boot.cfg following earlier guidance:
menu=Xen:load /netbsd-XEN3PAE_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M
menu=Xen.ok:load /netbsd-XEN3PAE_DOM0.ok.gz console=pc;multiboot /xen.ok.gz dom0_mem=512M
menu=GENERIC:boot
menu=GENERIC single-user:boot -s
menu=GENERIC.ok:boot netbsd.ok
menu=GENERIC.ok single-user:boot netbsd.ok -s
menu=Drop to boot prompt:prompt
default=1
timeout=30
"""]]

Upgrading Xen versions
----------------------

Minor version upgrades are trivial. Just rebuild/replace the
xenkernel version and copy the new xen.gz to `/` (where `/boot.cfg`
references it), and reboot.
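As an illustration, a minor upgrade within the 4.8 series might look
like this (a sketch; assumes binary packages via pkgin, and keeps a
backup copy of the old hypervisor):

[[!template id=programlisting text="""
# pkgin install xenkernel48 xentools48
# cp /xen.gz /xen.old.gz
# cp /usr/pkg/xen48-kernel/xen.gz /
# shutdown -r now
"""]]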
#Unprivileged domains (domU)

This section describes general concepts about domUs. It does not
address specific domU operating systems or how to install them. The
config files for domUs are typically in `/usr/pkg/etc/xen`, and are
typically named so that the file name, domU name and the domU's host
name match.

The domU is provided with CPU and memory by Xen, configured by the
dom0. The domU is provided with disk and network by the dom0,
mediated by Xen, and configured in the dom0.

Entropy in domUs can be an issue; physical disks and network are on
the dom0. NetBSD's /dev/random system works, but is often challenged.

Config files
------------

See /usr/pkg/share/examples/xen/xlexample*
for a small number of well-commented examples, mostly for running
GNU/Linux.

The following is an example minimal domain configuration file. The
domU serves as a network file server.
[[!template id=filecontent name="/usr/pkg/etc/xen/foo" text="""
name = "domU-id"
kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
memory = 1024
vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
         'file:/n0/xen/foo-wd1,0x1,w' ]
"""]]

The domain will have the name given in the `name` setting. The kernel
has the host/domU name in it, so that on the dom0 one can update the
various domUs independently. The `vif` line causes an interface to be
provided, with a specific MAC address (do not reuse MAC addresses!), in
bridged mode. Two disks are provided, and they are both writable; the
bits are stored in files and Xen attaches them to a vnd(4) device in the
dom0 on domain creation. The system treats xbd0 as the boot device
without needing explicit configuration.

By convention, domain config files are kept in `/usr/pkg/etc/xen`. Note
that "xl create" takes the name of a config file, while other commands
take the name of a domain.

Examples of commands:

[[!template id=programlisting text="""
xl create /usr/pkg/etc/xen/foo
xl console domU-id
xl create -c /usr/pkg/etc/xen/foo
xl shutdown domU-id
xl list
"""]]

Typing `^]` will exit the console session. Shutting down a domain is
equivalent to pushing the power button; a NetBSD domU will receive a
power-press event and do a clean shutdown. Shutting down the dom0
will trigger controlled shutdowns of all configured domUs.

CPU and memory
--------------

A domain is provided with some number of vcpus, up to the number
of CPUs seen by the hypervisor. For a domU, this is controlled
from the config file by the "vcpus = N" directive.

A domain is provided with memory; this is controlled in the config
file by "memory = N" (in megabytes). In the straightforward case, the
sum of the memory allocated to the dom0 and all domUs must be less
than the available memory.

Xen also provides a "balloon" driver, which can be used to let domains
use more memory temporarily.
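For example, to shrink or grow a running domU's memory within its
configured maximum, or to change the number of active vcpus, commands
along these lines should work (a sketch; `domU-id` is the example
domain above, and values beyond the configured `memory`/`vcpus` limits
will not take effect):

[[!template id=programlisting text="""
# xl mem-set domU-id 768
# xl vcpu-set domU-id 2
"""]]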
"./MAKEDEV vnd4" in the +dom0. + +Note that NetBSD by default creates only xbd[0123]. If you need more +virtual disks in a domU, run e.g. "./MAKEDEV xbd4" in the domU. + +Virtual Networking +------------------ + +Xen provides virtual Ethernets, each of which connects the dom0 and a +domU. For each virtual network, there is an interface "xvifN.M" in +the dom0, and a matching interface xennetM (NetBSD name) in domU index N. +The interfaces behave as if there is an Ethernet with two +adapters connected. From this primitive, one can construct various +configurations. We focus on two common and useful cases for which +there are existing scripts: bridging and NAT. + +With bridging (in the example above), the domU perceives itself to be +on the same network as the dom0. For server virtualization, this is +usually best. Bridging is accomplished by creating a bridge(4) device +and adding the dom0's physical interface and the various xvifN.0 +interfaces to the bridge. One specifies "bridge=bridge0" in the domU +config file. The bridge must be set up already in the dom0; an +example /etc/ifconfig.bridge0 is: + +[[!template id=filecontent name="/etc/ifconfig.bridge0" text=""" +create +up +!brconfig bridge0 add wm0 +"""]] + +With NAT, the domU perceives itself to be behind a NAT running on the +dom0. This is often appropriate when running Xen on a workstation. +TODO: NAT appears to be configured by "vif = [ '' ]". + +The MAC address specified is the one used for the interface in the new +domain. The interface in dom0 will use this address XOR'd with +00:00:00:01:00:00. Random MAC addresses are assigned if not given. + +Starting domains automatically +------------------------------ + +To start domains `domU-netbsd` and `domU-linux` at boot and shut them +down cleanly on dom0 shutdown, add the following in rc.conf: + +[[!template id=filecontent name="/etc/rc.conf" text=""" +xendomains="domU-netbsd domU-linux" +"""]] + +#Creating a domU + +Creating domUs is almost entirely independent of operating system. We +have already presented the basics of config files. Note that you must +have already completed the dom0 setup so that "xl list" works. + +Creating a NetBSD domU +---------------------- + +See the earlier config file, and adjust memory. Decide on how much +storage you will provide, and prepare it (file or LVM). + +While the kernel will be obtained from the dom0 file system, the same +file should be present in the domU as /netbsd so that tools like +savecore(8) can work. (This is helpful but not necessary.) + +The kernel must be specifically for Xen and for use as a domU. The +i386 and amd64 provide the following kernels: + + i386 XEN3PAE_DOMU + amd64 XEN3_DOMU + +This will boot NetBSD, but this is not that useful if the disk is +empty. One approach is to unpack sets onto the disk outside of xen +(by mounting it, just as you would prepare a physical disk for a +system you can't run the installer on). + +A second approach is to run an INSTALL kernel, which has a miniroot +and can load sets from the network. To do this, copy the INSTALL +kernel to / and change the kernel line in the config file to: - #---------------------------------------------------------------------------- - # Set the kernel command line for the new domain. + kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU" - # Set root device. This one does matter for NetBSD - root = "xbd0" - # extra parameters passed to the kernel - # this is where you can set boot flags like -s, -a, etc ... 
- #extra = "" - - #---------------------------------------------------------------------------- - # Set according to whether you want the domain restarted when it exits. - # The default is False. - #autorestart = True - - # end of nbsd config file ==================================================== - -When a new domain is created, xen calls the -`/usr/pkg/etc/xen/vif-bridge` script for each virtual network interface -created in *domain0*. This can be used to automatically configure the -xvif?.? interfaces in *domain0*. In our example, these will be bridged -with the bridge0 device in *domain0*, but the bridge has to exist first. -To do this, create the file `/etc/ifconfig.bridge0` and make it look -like this: - - create - !brconfig $int add ex0 up - -(replace `ex0` with the name of your physical interface). Then bridge0 -will be created on boot. See the MAN.BRIDGE.4 man page for details. - -So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for xvif?.? (a -working vif-bridge is also provided with xentools20) configuring: - - #!/bin/sh - #============================================================================ - # $NetBSD: howto.mdwn,v 1.5 2013/11/01 12:27:37 mspo Exp $ - # - # /usr/pkg/etc/xen/vif-bridge - # - # Script for configuring a vif in bridged mode with a dom0 interface. - # The xend(8) daemon calls a vif script when bringing a vif up or down. - # The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp - # in the ``vif-script'' field. - # - # Usage: vif-bridge up|down [var=value ...] - # - # Actions: - # up Adds the vif interface to the bridge. - # down Removes the vif interface from the bridge. - # - # Variables: - # domain name of the domain the interface is on (required). - # vifq vif interface name (required). - # mac vif MAC address (required). - # bridge bridge to add the vif to (required). - # - # Example invocation: - # - # vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0 - # - #============================================================================ - - # Exit if anything goes wrong - set -e - - echo "vif-bridge $*" - - # Operation name. - OP=$1; shift - - # Pull variables in args into environment - for arg ; do export "${arg}" ; done - - # Required parameters. Fail if not set. - domain=${domain:?} - vif=${vif:?} - mac=${mac:?} - bridge=${bridge:?} - - # Optional parameters. Set defaults. - ip=${ip:-''} # default to null (do nothing) - - # Are we going up or down? - case $OP in - up) brcmd='add' ;; - down) brcmd='delete' ;; - *) - echo 'Invalid command: ' $OP - echo 'Valid commands are: up, down' - exit 1 - ;; - esac - - # Don't do anything if the bridge is "null". - if [ "${bridge}" = "null" ] ; then - exit - fi - - # Don't do anything if the bridge doesn't exist. - if ! ifconfig -l | grep "${bridge}" >/dev/null; then - exit - fi - - # Add/remove vif to/from bridge. - ifconfig x${vif} $OP - brconfig ${bridge} ${brcmd} x${vif} - -Now, running - - xm create -c /usr/pkg/etc/xen/nbsd - -should create a domain and load a NetBSD kernel in it. (Note: `-c` -causes xm to connect to the domain's console once created.) The kernel -will try to find its root file system on xbd0 (i.e., wd0e) which hasn't -been created yet. wd0e will be seen as a disk device in the new domain, -so it will be 'sub-partitioned'. We could attach a ccd to wd0e in -*domain0* and partition it, newfs and extract the NetBSD/i386 or amd64 -tarballs there, but there's an easier way: load the -`netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets. 
A second approach is to run an INSTALL kernel, which has a miniroot
and can load sets from the network. To do this, copy the INSTALL
kernel to / and change the kernel line in the config file to:

    kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"

Then, start the domain as "xl create -c configfile".

Alternatively, if you want to install NetBSD/Xen with a CDROM image, the
following line should be used in the config file:

    disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]

After booting the domain, the option to install via CDROM may be
selected. The CDROM device should be changed to `xbd1d`.

Once done installing, "halt -p" the new domain (don't reboot or halt,
as that would reload the INSTALL_XEN3_DOMU kernel even if you changed
the config file), switch the config file back to the XEN3_DOMU kernel,
and start the new domain again. Now it should be able to use "root on
xbd0a" and you should have a functional NetBSD domU.

TODO: check if this is still accurate.
When the new domain is booting you'll see some warnings about *wscons*
and the pseudo-terminals. These can be fixed by editing the files
`/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
`/etc/ttys` except *console*; finally, all screens must be commented
out in `/etc/wscons.conf`.

It is also desirable to add

    powerd=YES

in rc.conf. This way, the domain will be properly shut down if
`xl shutdown` (or the older `xm shutdown -R` / `xm shutdown -H`) is
used on the dom0.

It is not strictly necessary to have a kernel (as /netbsd) in the domU
file system. However, various programs (e.g. netstat) will use that
kernel to look up symbols to read from kernel virtual memory. If
/netbsd is not the running kernel, those lookups will fail. (This is
not really a Xen-specific issue, but because the domU kernel is
obtained from the dom0, it is far more likely to be out of sync or
missing with Xen.)

Creating a Linux domU
---------------------

Creating unprivileged Linux domains isn't much different from
unprivileged NetBSD domains, but there are some details to know.

First, the second parameter passed to the disk declaration (the '0x1'
in the example below)

    disk = [ 'phy:/dev/wd0e,0x1,w' ]

does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda). Linux builds device numbers as: (major \<\< 8 + minor).
So, hda1 which has major 3 and minor 1 on a Linux system will have
device number 0x301. Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers.
To export a partition to a Linux guest we can use:

    disk = [ 'phy:/dev/wd0e,0x300,w' ]
    root = "/dev/hda1 ro"

and it will appear as /dev/hda on the Linux system, and be used as root
partition.

To install the Linux system on the partition to be exported to the
guest domain, the following method can be used: install
sysutils/e2fsprogs from pkgsrc. Use mke2fs to format the partition
that will be the root partition of your Linux domain, and mount it.
Then copy the files from a working Linux system, make adjustments in
`/etc` (fstab, network config). It should also be possible to extract
binary packages such as .rpm or .deb directly to the mounted partition
using the appropriate tool, possibly running under NetBSD's Linux
emulation. Once the file system has been populated, umount it. If
desirable, the file system can be converted to ext3 using tune2fs -j.
It should now be possible to boot the Linux guest domain, using one of
the vmlinuz-\*-xenU kernels available in the Xen binary distribution.
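A condensed sketch of that procedure, assuming the wd0e partition from
the example, e2fsprogs installed from pkgsrc, and a copy of the Linux
files available under a placeholder path (the exact /etc adjustments,
such as network configuration, depend on the distribution):

[[!template id=programlisting text="""
# mke2fs /dev/rwd0e
# mount -t ext2fs /dev/wd0e /mnt
# (cd /path/to/linux/root && tar cpf - .) | (cd /mnt && tar xpf -)
# vi /mnt/etc/fstab
# umount /mnt
# tune2fs -j /dev/rwd0e
"""]]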
To get the Linux console right, you need to add:

    extra = "xencons=tty1"

to your configuration since not all Linux distributions auto-attach a
tty to the xen console.

Creating a Solaris domU
-----------------------

See possibly outdated
[Solaris domU instructions](/ports/xen/howto-solaris/).
PCI passthrough: Using PCI devices in guest domains
---------------------------------------------------

The dom0 can give other domains access to selected PCI
devices. This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller. However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices. Also, it's not possible to export ISA devices to non-dom0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.

If the dom0 is NetBSD, it has to be running Xen 3.1, as support has
not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to
the "pciback" driver in the dom0. Devices passed to the dom0 via the
pciback.hide boot parameter will attach to "pciback" instead of the
usual driver. The list of devices is specified as "(bus:dev.func)",
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the dom0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.
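The pciback.hide parameter is an argument to the dom0 kernel. With the
/boot.cfg style of booting shown earlier, that would look something
like the following (a sketch only; the device addresses are the example
ones above, and the rest of the line matches the earlier boot.cfg):

[[!template id=filecontent name="/boot.cfg" text="""
menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc pciback.hide=(00:0a.0)(00:06.0);multiboot /xen.gz dom0_mem=512M
"""]]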
PCI devices to be exported to a domU are listed in the "pci" array of
the domU's config file, with the format "0000:bus:dev.func".

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

In the domU an "xpci" device will show up, to which one or more PCI
buses will attach. Then the PCI drivers will attach to PCI buses as
usual. Note that the default NetBSD DOMU kernels do not have "xpci"
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU. Here's a kernel config example;
note that only the "xpci" lines are unusual.

    include "arch/i386/conf/XEN3_DOMU"

    # Add support for PCI buses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # PCI USB controllers
    uhci* at pci? dev ? function ?  # Universal Host Controller (Intel)

    # USB bus support
    usb* at uhci?

    # USB Hubs
    uhub* at usb?
    uhub* at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass* at uhub? port ? configuration ? interface ?
    wd* at umass?

    # SCSI controllers
    ahc* at pci? dev ? function ?   # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd* at scsibus? target ? lun ?  # SCSI disk drives
    cd* at scsibus? target ? lun ?  # SCSI CD-ROM drives

#NetBSD as a domU in a VPS

The bulk of the HOWTO is about using NetBSD as a dom0 on your own
hardware. This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0. This is not intended to be an exhaustive list of VPS providers;
only a few are mentioned that specifically support NetBSD.

VPS operators provide varying degrees of access and mechanisms for
configuration. The big issue is usually how one controls which kernel
is booted, because the kernel is nominally in the dom0 file system (to
which VPS users do not normally have access). A second issue is how
to install NetBSD.
A VPS user may want to compile a kernel for security updates, to run
npf, to run IPsec, or for any other reason one might want to change
their kernel.

One approach is to have an administrative interface to upload a kernel,
or to select from a prepopulated list. Other approaches are pygrub
(deprecated) and pvgrub, which are ways to have a bootloader obtain a
kernel from the domU file system. This is closer to a regular physical
computer, where someone who controls a machine can replace the kernel.

Another issue is multiple CPUs. With NetBSD 6, domUs support
multiple vcpus, and it is typical for VPS providers to enable multiple
CPUs for NetBSD domUs.

pvgrub
------

pvgrub is a version of grub that uses PV operations instead of BIOS
calls. It is booted from the dom0 as the domU kernel, and then reads
/grub/menu.lst and loads a kernel from the domU file system.

[Panix](http://www.panix.com/) lets users use pvgrub. Panix reports
that pvgrub works with FFSv2 with 16K/2K and 32K/4K block/frag sizes
(and hence with defaults from "newfs -O 2"). See [Panix's pvgrub
page](http://www.panix.com/v-colo/grub.html), which describes only
Linux but should be updated to cover NetBSD :-).

[prgmr.com](http://prgmr.com/) also lets users use pvgrub to boot
their own kernel. See the [prgmr.com NetBSD
HOWTO](http://wiki.prgmr.com/mediawiki/index.php/NetBSD_as_a_DomU)
(which is in need of updating).

It appears that [grub's FFS
code](http://xenbits.xensource.com/hg/xen-unstable.hg/file/bca284f67702/tools/libfsimage/ufs/fsys_ufs.c)
does not support all aspects of modern FFS, but there are also reports
that FFSv2 works fine. At prgmr, typically one has an ext2 or FAT
partition for the kernel with the intent that grub can understand it,
which leads to /netbsd not being the actual kernel. One must remember
to update the special boot partition.
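With pvgrub, the domU needs a /grub/menu.lst of its own. The following
is only a minimal sketch: it assumes the kernel lives in the file
system grub reads (the special boot partition at prgmr, or the root
file system elsewhere), and the exact "root" syntax ((hd0,0) versus
(hd0,a)) depends on the provider's pv-grub build and disk layout, so
check their documentation.

[[!template id=filecontent name="/grub/menu.lst" text="""
default=0
timeout=5

title NetBSD
    root (hd0,a)
    kernel /netbsd-XEN3_DOMU
"""]]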
pygrub
------

pygrub runs in the dom0 and looks into the domU file system. This
implies that the domU must have a kernel in a file system in a format
known to pygrub.

pygrub doesn't seem to work to load Linux images under a NetBSD dom0,
and is inherently less secure than pvgrub because it runs inside the
dom0. For both these reasons, pygrub should not be used, and is only
still present so that historical domU images using it still work.

As of 2014, pygrub seems to be of mostly historical
interest. New domUs should use pvgrub.

Amazon
------

See the [Amazon EC2 page](/amazon_ec2/).