1: Introduction
2: ============
3:
[![Xen screenshot](http://www.netbsd.org/gallery/in-Action/hubertf-xens.png)](../../gallery/in-Action/hubertf-xen.png)
6:
7: Xen is a virtual machine monitor or hypervisor for x86 hardware
8: (i686-class or higher), which supports running multiple guest
9: operating systems on a single physical machine. With Xen, one uses
10: the Xen kernel to control the CPU, memory and console, a dom0
11: operating system which mediates access to other hardware (e.g., disks,
12: network, USB), and one or more domU operating systems which operate in
13: an unprivileged virtualized environment. IO requests from the domU
14: systems are forwarded by the hypervisor (Xen) to the dom0 to be
15: fulfilled.
16:
17: Xen supports two styles of guests. The original is Para-Virtualized
18: (PV) which means that the guest OS does not attempt to access hardware
19: directly, but instead makes hypercalls to the hypervisor. This is
20: analogous to a user-space program making system calls. (The dom0
21: operating system uses PV calls for some functions, such as updating
22: memory mapping page tables, but has direct hardware access for disk
23: and network.) PV guests must be specifically coded for Xen.
24:
25: The more recent style is HVM, which means that the guest does not have
26: code for Xen and need not be aware that it is running under Xen.
27: Attempts to access hardware registers are trapped and emulated. This
28: style is less efficient but can run unmodified guests.
29:
Generally, any amd64 machine will work with Xen and PV guests. In
theory i386 computers without amd64 support can be used for Xen <=
4.2, but we have no recent reports of this working (take that as a
hint). For HVM guests, hardware virtualization support is required:
the VT-x feature (shown as VMX) on Intel CPUs, or SVM on AMD CPUs;
"cpuctl identify 0" will show whether it is present. TODO: Check the
above feature names.
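
For example, one can check for these flags on the dom0 hardware with
the following (a quick sketch; the exact feature strings printed vary
by CPU and NetBSD version):

    # cpuctl identify 0 | grep -i -e vmx -e svm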
36:
37: At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
38: The dom0 can start one or more domUs. (Booting is explained in detail
39: in the dom0 section.)
40:
41: NetBSD supports Xen in that it can serve as dom0, be used as a domU,
42: and that Xen kernels and tools are available in pkgsrc. This HOWTO
43: attempts to address both the case of running a NetBSD dom0 on hardware
44: and running domUs under it (NetBSD and other), and also running NetBSD
45: as a domU in a VPS.
46:
47: Some versions of Xen support "PCI passthrough", which means that
48: specific PCI devices can be made available to a specific domU instead
49: of the dom0. This can be useful to let a domU run X11, or access some
50: network interface or other peripheral.
51:
52: Prerequisites
53: -------------
54:
55: Installing NetBSD/Xen is not extremely difficult, but it is more
56: complex than a normal installation of NetBSD.
57: In general, this HOWTO is occasionally overly restrictive about how
58: things must be done, guiding the reader to stay on the established
59: path when there are no known good reasons to stray.
60:
61: This HOWTO presumes a basic familiarity with the Xen system
62: architecture. This HOWTO presumes familiarity with installing NetBSD
63: on i386/amd64 hardware and installing software from pkgsrc.
64: See also the [Xen website](http://www.xenproject.org/).
65:
66: History
67: -------
68:
69: NetBSD used to support Xen2; this has been removed.
70:
71: Before NetBSD's native bootloader could support Xen, the use of
72: grub was recommended. If necessary, see the
73: [old grub information](/ports/xen/howto-grub/).
74:
75: Versions of Xen and NetBSD
76: ==========================
77:
78: Most of the installation concepts and instructions are independent
79: of Xen version and NetBSD version. This section gives advice on
80: which version to choose. Versions not in pkgsrc and older unsupported
81: versions of NetBSD are intentionally ignored.
82:
83: Xen
84: ---
85:
In NetBSD, Xen is provided in pkgsrc, via matching pairs of packages
87: xenkernel and xentools. We will refer only to the kernel versions,
88: but note that both packages must be installed together and must have
89: matching versions.
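
A quick sanity check that the installed kernel and tools packages
match is to list them (the grep pattern assumes the usual package
names):

    # pkg_info | grep '^xen'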
90:
91: xenkernel3 and xenkernel33 provide Xen 3.1 and 3.3. These no longer
92: receive security patches and should not be used. Xen 3.1 supports PCI
93: passthrough. Xen 3.1 supports non-PAE on i386.
94:
95: xenkernel41 provides Xen 4.1. This is no longer maintained by Xen,
96: but as of 2014-12 receives backported security patches. It is a
97: reasonable although trailing-edge choice.
98:
99: xenkernel42 provides Xen 4.2. This is maintained by Xen, but old as
100: of 2014-12.
101:
102: Ideally newer versions of Xen will be added to pkgsrc.
103:
Note that NetBSD's Xen support is called XEN3. It works with Xen 3.1
through 4.2 because the hypercall interface has remained stable.
106:
107: Xen command program
108: -------------------
109:
110: Early Xen used a program called "xm" to manipulate the system from the
111: dom0. Starting in 4.1, a replacement program with similar behavior
112: called "xl" is provided. In 4.2 and later, "xl" is preferred. 4.4 is
113: the last version that has "xm".
114:
115: NetBSD
116: ------
117:
The netbsd-5, netbsd-6, netbsd-7, and -current branches are all
reasonable choices, with more or less the same considerations as for
non-Xen use. Therefore, netbsd-6 (the stable branch of the most
recent release) is recommended for production use. For those wanting
to learn Xen, or those without production stability concerns,
netbsd-7 is likely most appropriate.
124:
125: As of NetBSD 6, a NetBSD domU will support multiple vcpus. There is
126: no SMP support for NetBSD as dom0. (The dom0 itself doesn't really
need SMP; the lack of support is mainly a problem when using the dom0
as a normal computer.)
129:
130: Architecture
131: ------------
132:
133: Xen itself can run on i386 or amd64 machines. (Practically, almost
134: any computer where one would want to run Xen supports amd64.) If
135: using an i386 NetBSD kernel for the dom0, PAE is required (PAE
versions are built by default). While an i386 dom0 works fine, amd64
is recommended as the more common configuration.
138:
139: Xen 4.2 is the last version to support i386 as a host. TODO: Clarify
140: if this is about the CPU having to be amd64, or about the dom0 kernel
141: having to be amd64.
142:
143: One can then run i386 domUs and amd64 domUs, in any combination. If
144: running an i386 NetBSD kernel as a domU, the PAE version is required.
(Note that emacs, at least, fails if run on an i386 PAE kernel when
built on a non-PAE kernel, and vice versa, presumably due to bugs in
the undump code.)
147:
148: Recommendation
149: --------------
150:
151: Therefore, this HOWTO recommends running xenkernel42 (and xentools42),
xl, the NetBSD 6 stable branch, and an amd64 kernel as the dom0.
Either i386 or amd64 NetBSD may be used for domUs.
154:
155: NetBSD as a dom0
156: ================
157:
158: NetBSD can be used as a dom0 and works very well. The following
159: sections address installation, updating NetBSD, and updating Xen.
160: Note that it doesn't make sense to talk about installing a dom0 OS
161: without also installing Xen itself. We first address installing
162: NetBSD, which is not yet a dom0, and then adding Xen, pivoting the
163: NetBSD install to a dom0 install by just changing the kernel and boot
164: configuration.
165:
166: Styles of dom0 operation
167: ------------------------
168:
169: There are two basic ways to use Xen. The traditional method is for
170: the dom0 to do absolutely nothing other than providing support to some
171: number of domUs. Such a system was probably installed for the sole
172: purpose of hosting domUs, and sits in a server room on a UPS.
173:
174: The other way is to put Xen under a normal-usage computer, so that the
175: dom0 is what the computer would have been without Xen, perhaps a
176: desktop or laptop. Then, one can run domUs at will. Purists will
177: deride this as less secure than the previous approach, and for a
178: computer whose purpose is to run domUs, they are right. But Xen and a
dom0 (without domUs) is not meaningfully less secure than the same
system running without Xen. One can alternately boot Xen or regular
NetBSD with few problems, simply refraining from starting the Xen
daemons when not running Xen.
183:
184: Note that NetBSD as dom0 does not support multiple CPUs. This will
185: limit the performance of the Xen/dom0 workstation approach.
186:
187: Installation of NetBSD
188: ----------------------
189:
190: First,
191: [install NetBSD/amd64](/guide/inst/)
192: just as you would if you were not using Xen.
193: However, the partitioning approach is very important.
194:
195: If you want to use RAIDframe for the dom0, there are no special issues
196: for Xen. Typically one provides RAID storage for the dom0, and the
197: domU systems are unaware of RAID. The 2nd-stage loader bootxx_* skips
198: over a RAID1 header to find /boot from a filesystem within a RAID
199: partition; this is no different when booting Xen.
200:
There are four styles of providing backing storage for the virtual
disks used by domUs: raw partitions, LVM, file-backed vnd(4), and SAN.
203:
204: With raw partitions, one has a disklabel (or gpt) partition sized for
205: each virtual disk to be used by the domU. (If you are able to predict
206: how domU usage will evolve, please add an explanation to the HOWTO.
207: Seriously, needs tend to change over time.)
208:
209: One can use [lvm(8)](/guide/lvm/) to create logical devices to use
210: for domU disks. This is almost as efficient as raw disk partitions
211: and more flexible. Hence raw disk partitions should typically not
212: be used.
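
As a sketch, assuming a volume group vg0 has already been set up with
lvm(8) (the volume group and logical volume names here are
hypothetical), a logical volume for a domU disk could be created with:

    ## vg0 and domu1 are example names
    # lvm lvcreate -L 8G -n domu1 vg0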
213:
One can use files in the dom0 filesystem, typically created by using
dd(1) to copy /dev/zero into a file of the desired size. This is
somewhat less efficient,
216: but very convenient, as one can cp the files for backup, or move them
217: between dom0 hosts.
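
For example, to create a 4 GB backing file (the path matches the
file-backed example config later in this HOWTO):

    # mkdir -p /var/xen
    # dd if=/dev/zero of=/var/xen/nbsd-disk bs=1m count=4096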
218:
219: Finally, in theory one can place the files backing the domU disks in a
220: SAN. (This is an invitation for someone who has done this to add a
221: HOWTO page.)
222:
223: Installation of Xen
224: -------------------
225:
226: In the dom0, install sysutils/xenkernel42 and sysutils/xentools42 from
227: pkgsrc (or another matching pair).
228: See [the pkgsrc
229: documentation](http://www.NetBSD.org/docs/pkgsrc/) for help with pkgsrc.
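
For example, to build and install both packages from a pkgsrc
checkout (binary packages installed with pkg_add or pkgin work
equally well):

    # cd /usr/pkgsrc/sysutils/xenkernel42 && make install
    # cd /usr/pkgsrc/sysutils/xentools42 && make install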
230:
231: For Xen 3.1, support for HVM guests is in sysutils/xentool3-hvm. More
232: recent versions have HVM support integrated in the main xentools
233: package. It is entirely reasonable to run only PV guests.
234:
Next you need to install the selected Xen kernel itself; pkgsrc
installs it as "/usr/pkg/xen*-kernel/xen.gz". Copy it to /.
237: For debugging, one may copy xen-debug.gz; this is conceptually similar
238: to DIAGNOSTIC and DEBUG in NetBSD. xen-debug.gz is basically only
239: useful with a serial console. Then, place a NetBSD XEN3_DOM0 kernel
240: in /, copied from releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
241: of a NetBSD build. Both xen and NetBSD may be left compressed. (If
242: using i386, use releasedir/i386/binary/kernel/netbsd-XEN3PAE_DOM0.gz.)
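
As a concrete sketch for the 4.2 packages on amd64 (the xen42-kernel
directory name is what that package version uses; adjust as needed):

    ## adjust xen42-kernel for other Xen versions
    # cp /usr/pkg/xen42-kernel/xen.gz /
    # cp /usr/pkg/xen42-kernel/xen-debug.gz /
    # cp releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz /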
243:
In a dom0 kernel, kernfs is mandatory for xend to communicate with the
245: kernel, so ensure that /kern is in fstab.
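
The usual fstab(5) entry looks like this:

    kernfs /kern kernfs rw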
246:
247: Because you already installed NetBSD, you have a working boot setup
248: with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
249: beginning of your root filesystem, /boot present, and likely
250: /boot.cfg. (If not, fix before continuing!)
251:
252: See boot.cfg(5) for an example. The basic line is
253:
    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
255:
256: which specifies that the dom0 should have 256M, leaving the rest to be
257: allocated for domUs.
258:
259: As with non-Xen systems, you should have a line to boot /netbsd (a
260: kernel that works without Xen) and fallback versions of the non-Xen
261: kernel, Xen, and the dom0 kernel.
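
A minimal /boot.cfg sketch combining these (the .ok names for backup
copies are just an example; keep whatever fallback scheme you prefer):

    menu=Boot normally:boot /netbsd
    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
    menu=Xen (backup):load /netbsd-XEN3_DOM0.gz.ok console=pc;multiboot /xen.gz.ok dom0_mem=256M
    timeout=5
    default=2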
262:
263: The [HowTo on Installing into
264: RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
265: explains how to set up booting a dom0 with Xen using grub with
266: NetBSD's RAIDframe. (This is obsolete with the use of NetBSD's native
267: boot.)
268:
269: Configuring Xen
270: ---------------
271:
272: Now, you have a system that will boot Xen and the dom0 kernel, and
273: just run the dom0 kernel. There will be no domUs, and none can be
274: started because you still have to configure the dom0 tools. The
275: daemons which should be run vary with Xen version and with whether one
276: is using xm or xl. Note that xend is for supporting "xm", and should
277: only be used if you plan on using "xm". Do NOT enable xend if you
278: plan on using "xl" as it will cause problems.
279:
280: TODO: Give 3.1 advice (or remove it from pkgsrc).
281:
282: For 3.3 (and thus xm), add to rc.conf (but note that you should have
283: installed 4.1 or 4.2):
284:
285: xend=YES
286: xenbackendd=YES
287:
288: For 4.1 (and thus xm), add to rc.conf:
289:
290: xend=YES
291: xencommons=YES
292:
293: TODO: Explain why if xm is preferred on 4.1, rc.d/xendomains has xl.
294:
295: For 4.2 with xl, add to rc.conf:
296:
297: TODO: explain if there is a xend replacement
298: xencommons=YES
299:
300: TODO: Recommend for/against xen-watchdog.
301:
302: Updating NetBSD in a dom0
303: -------------------------
304:
305: This is just like updating NetBSD on bare hardware, assuming the new
306: version supports the version of Xen you are running. Generally, one
307: replaces the kernel and reboots, and then overlays userland binaries
308: and adjusts /etc.
309:
310: Note that one must update both the non-Xen kernel typically used for
311: rescue purposes and the DOM0 kernel used with Xen.
312:
313: To convert from grub to /boot, install an mbr bootblock with fdisk,
314: bootxx_ with installboot, /boot and /boot.cfg. This really should be
315: no different than completely reinstalling boot blocks on a non-Xen
316: system.
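
As a sketch, for an FFSv1 root filesystem on wd0 (the device names and
the bootxx variant are assumptions; see fdisk(8) and installboot(8)
for your layout):

    ## assumes FFSv1 root on wd0
    # fdisk -i wd0
    # installboot -v /dev/rwd0a /usr/mdec/bootxx_ffsv1
    # cp /usr/mdec/boot /boot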
317:
318: Updating Xen versions
319: ---------------------
320:
321: Updating Xen is conceptually not difficult, but can run into all the
322: issues found when installing Xen. Assuming migration from 4.1 to 4.2,
323: remove the xenkernel41 and xentools41 packages and install the
324: xenkernel42 and xentools42 packages. Copy the 4.2 xen.gz to /.
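
For example (package and directory names as in the Installation of
Xen section):

    # pkg_delete xentools41 xenkernel41
    # cd /usr/pkgsrc/sysutils/xenkernel42 && make install
    # cd /usr/pkgsrc/sysutils/xentools42 && make install
    # cp /usr/pkg/xen42-kernel/xen.gz /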
325:
326: Ensure that the contents of /etc/rc.d/xen* are correct. Enable the
327: correct set of daemons. Ensure that the domU config files are valid
328: for the new version.
329:
330:
331: Unprivileged domains (domU)
332: ===========================
333:
334: This section describes general concepts about domUs. It does not
335: address specific domU operating systems or how to install them.
336:
337: Provided Resources for PV domains
338: ---------------------------------
339:
340: TODO: Explain that domUs get cpu, memory, disk and network.
341: Explain that randomness can be an issue.
342:
343: Virtual disks
344: -------------
345:
346: TODO: Explain how to set up files for vnd and that one should write all zeros to preallocate.
347: TODO: Explain in what NetBSD versions sparse vnd files do and don't work.
348:
349: Virtual Networking
350: ------------------
351:
352: TODO: explain xvif concept, and that it's general.
353:
354: There are two normal styles: bridging and NAT.
355:
356: With bridging, the domU perceives itself to be on the same network as
357: the dom0. For server virtualization, this is usually best.
358:
359: With NAT, the domU perceives itself to be behind a NAT running on the
360: dom0. This is often appropriate when running Xen on a workstation.
361:
362: One can construct arbitrary other configurations, but there is no
363: script support.
364:
365: Config files
366: ------------
367:
368: TODO: give example config files. Use both lvm and vnd.
369:
370: TODO: explain the mess with 3 arguments for disks and how to cope (0x1).
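
As a minimal sketch (the domain name, the lvm device path, and the
file path are hypothetical; a complete NetBSD example appears later in
this HOWTO), a config using one lvm-backed and one file-backed disk
might look like:

    # hypothetical names; see the full example later in this HOWTO
    kernel = "/netbsd-XEN3_DOMU"
    memory = 256
    name = "domu1"
    vif = [ 'bridge=bridge0' ]
    disk = [ 'phy:/dev/mapper/vg0-domu1,0x1,w',
             'file:/var/xen/nbsd-disk,0x2,w' ]
    root = "xbd0"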
371:
372: Starting domains
373: ----------------
374:
375: TODO: Explain "xm start" and "xl start". Explain rc.d/xendomains.
376:
377: TODO: Explain why 4.1 rc.d/xendomains has xl, when one should use xm
378: on 4.1.
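
As a sketch with xl (xm create works the same way on versions where
xm is preferred), assuming a config file /usr/pkg/etc/xen/domu1:

    ## domu1 is an example domain name
    # xl create /usr/pkg/etc/xen/domu1
    # xl console domu1

To have domains started automatically at dom0 boot, the
rc.d/xendomains script starts the domains listed in the xendomains
variable in rc.conf, e.g. xendomains="domu1".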
379:
380: Creating specific unprivileged domains (domU)
381: =============================================
382:
383: Creating domUs is almost entirely independent of operating system. We
384: first explain NetBSD, and then differences for Linux and Solaris.
385:
386: Creating an unprivileged NetBSD domain (domU)
387: ---------------------------------------------
388:
389: Once you have *domain0* running, you need to start the xen tool daemon
390: (`/usr/pkg/share/examples/rc.d/xend start`) and the xen backend daemon
391: (`/usr/pkg/share/examples/rc.d/xenbackendd start` for Xen3\*,
392: `/usr/pkg/share/examples/rc.d/xencommons start` for Xen4.\*). Make sure
393: that `/dev/xencons` and `/dev/xenevt` exist before starting `xend`. You
394: can create them with this command:
395:
396: # cd /dev && sh MAKEDEV xen
397:
398: xend will write logs to `/var/log/xend.log` and
399: `/var/log/xend-debug.log`. You can then control xen with the xm tool.
400: 'xm list' will show something like:
401:
402: # xm list
403: Name Id Mem(MB) CPU State Time(s) Console
404: Domain-0 0 64 0 r---- 58.1
405:
406: 'xm create' allows you to create a new domain. It uses a config file in
407: PKG\_SYSCONFDIR for its parameters. By default, this file will be in
408: `/usr/pkg/etc/xen/`. On creation, a kernel has to be specified, which
409: will be executed in the new domain (this kernel is in the *domain0* file
410: system, not on the new domain virtual disk; but please note, you should
411: install the same kernel into *domainU* as `/netbsd` in order to make
412: your system tools, like savecore(8), work). A suitable kernel is
413: provided as part of the i386 and amd64 binary sets: XEN3\_DOMU.
414:
Here is an example config file, /usr/pkg/etc/xen/nbsd:
416:
417: # -*- mode: python; -*-
418: #============================================================================
419: # Python defaults setup for 'xm create'.
420: # Edit this file to reflect the configuration of your system.
421: #============================================================================
422:
423: #----------------------------------------------------------------------------
424: # Kernel image file. This kernel will be loaded in the new domain.
425: kernel = "/home/bouyer/netbsd-XEN3_DOMU"
426: #kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
427:
428: # Memory allocation (in megabytes) for the new domain.
429: memory = 128
430:
431: # A handy name for your new domain. This will appear in 'xm list',
432: # and you can use this as parameters for xm in place of the domain
433: # number. All domains must have different names.
434: #
435: name = "nbsd"
436:
437: # The number of virtual CPUs this domain has.
438: #
439: vcpus = 1
440:
441: #----------------------------------------------------------------------------
442: # Define network interfaces for the new domain.
443:
444: # Number of network interfaces (must be at least 1). Default is 1.
445: nics = 1
446:
447: # Define MAC and/or bridge for the network interfaces.
448: #
449: # The MAC address specified in ``mac'' is the one used for the interface
450: # in the new domain. The interface in domain0 will use this address XOR'd
451: # with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random
452: # MACs are assigned if not given.
453: #
454: # ``bridge'' is a required parameter, which will be passed to the
455: # vif-script called by xend(8) when a new domain is created to configure
456: # the new xvif interface in domain0.
457: #
458: # In this example, the xvif is added to bridge0, which should have been
459: # set up prior to the new domain being created -- either in the
460: # ``network'' script or using a /etc/ifconfig.bridge0 file.
461: #
462: vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]
463:
464: #----------------------------------------------------------------------------
465: # Define the disk devices you want the domain to have access to, and
466: # what you want them accessible as.
467: #
468: # Each disk entry is of the form:
469: #
470: # phy:DEV,VDEV,MODE
471: #
472: # where DEV is the device, VDEV is the device name the domain will see,
473: # and MODE is r for read-only, w for read-write. You can also create
474: # file-backed domains using disk entries of the form:
475: #
476: # file:PATH,VDEV,MODE
477: #
478: # where PATH is the path to the file used as the virtual disk, and VDEV
479: # and MODE have the same meaning as for ``phy'' devices.
480: #
481: # VDEV doesn't really matter for a NetBSD guest OS (it's just used as an index),
482: # but it does for Linux.
483: # Worse, the device has to exist in /dev/ of domain0, because xm will
484: # try to stat() it. This means that in order to load a Linux guest OS
485: # from a NetBSD domain0, you'll have to create /dev/hda1, /dev/hda2, ...
486: # on domain0, with the major/minor from Linux :(
487: # Alternatively it's possible to specify the device number in hex,
488: # e.g. 0x301 for /dev/hda1, 0x302 for /dev/hda2, etc ...
489:
490: disk = [ 'phy:/dev/wd0e,0x1,w' ]
491: #disk = [ 'file:/var/xen/nbsd-disk,0x01,w' ]
492: #disk = [ 'file:/var/xen/nbsd-disk,0x301,w' ]
493:
494: #----------------------------------------------------------------------------
495: # Set the kernel command line for the new domain.
496:
497: # Set root device. This one does matter for NetBSD
498: root = "xbd0"
499: # extra parameters passed to the kernel
500: # this is where you can set boot flags like -s, -a, etc ...
501: #extra = ""
502:
503: #----------------------------------------------------------------------------
504: # Set according to whether you want the domain restarted when it exits.
505: # The default is False.
506: #autorestart = True
507:
508: # end of nbsd config file ====================================================
509:
510: When a new domain is created, xen calls the
511: `/usr/pkg/etc/xen/vif-bridge` script for each virtual network interface
512: created in *domain0*. This can be used to automatically configure the
513: xvif?.? interfaces in *domain0*. In our example, these will be bridged
514: with the bridge0 device in *domain0*, but the bridge has to exist first.
515: To do this, create the file `/etc/ifconfig.bridge0` and make it look
516: like this:
517:
518: create
519: !brconfig $int add ex0 up
520:
521: (replace `ex0` with the name of your physical interface). Then bridge0
522: will be created on boot. See the bridge(4) man page for details.
523:
So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for configuring
xvif?.? interfaces (a working vif-bridge is also provided with the
xentools packages):
526:
527: #!/bin/sh
528: #============================================================================
530: #
531: # /usr/pkg/etc/xen/vif-bridge
532: #
533: # Script for configuring a vif in bridged mode with a dom0 interface.
534: # The xend(8) daemon calls a vif script when bringing a vif up or down.
535: # The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp
536: # in the ``vif-script'' field.
537: #
538: # Usage: vif-bridge up|down [var=value ...]
539: #
540: # Actions:
541: # up Adds the vif interface to the bridge.
542: # down Removes the vif interface from the bridge.
543: #
544: # Variables:
545: # domain name of the domain the interface is on (required).
    # vif        vif interface name (required).
547: # mac vif MAC address (required).
548: # bridge bridge to add the vif to (required).
549: #
550: # Example invocation:
551: #
552: # vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0
553: #
554: #============================================================================
555:
556: # Exit if anything goes wrong
557: set -e
558:
559: echo "vif-bridge $*"
560:
561: # Operation name.
562: OP=$1; shift
563:
564: # Pull variables in args into environment
565: for arg ; do export "${arg}" ; done
566:
567: # Required parameters. Fail if not set.
568: domain=${domain:?}
569: vif=${vif:?}
570: mac=${mac:?}
571: bridge=${bridge:?}
572:
573: # Optional parameters. Set defaults.
574: ip=${ip:-''} # default to null (do nothing)
575:
576: # Are we going up or down?
577: case $OP in
578: up) brcmd='add' ;;
579: down) brcmd='delete' ;;
580: *)
581: echo 'Invalid command: ' $OP
582: echo 'Valid commands are: up, down'
583: exit 1
584: ;;
585: esac
586:
587: # Don't do anything if the bridge is "null".
588: if [ "${bridge}" = "null" ] ; then
589: exit
590: fi
591:
592: # Don't do anything if the bridge doesn't exist.
593: if ! ifconfig -l | grep "${bridge}" >/dev/null; then
594: exit
595: fi
596:
597: # Add/remove vif to/from bridge.
598: ifconfig x${vif} $OP
599: brconfig ${bridge} ${brcmd} x${vif}
600:
601: Now, running
602:
603: xm create -c /usr/pkg/etc/xen/nbsd
604:
605: should create a domain and load a NetBSD kernel in it. (Note: `-c`
606: causes xm to connect to the domain's console once created.) The kernel
607: will try to find its root file system on xbd0 (i.e., wd0e) which hasn't
608: been created yet. wd0e will be seen as a disk device in the new domain,
609: so it will be 'sub-partitioned'. We could attach a ccd to wd0e in
610: *domain0* and partition it, newfs and extract the NetBSD/i386 or amd64
611: tarballs there, but there's an easier way: load the
612: `netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets.
613: Like other install kernels, it contains a ramdisk with sysinst, so you
614: can install NetBSD using sysinst on your new domain.
615:
616: If you want to install NetBSD/Xen with a CDROM image, the following line
617: should be used in the `/usr/pkg/etc/xen/nbsd` file:
618:
619: disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]
620:
621: After booting the domain, the option to install via CDROM may be
622: selected. The CDROM device should be changed to `xbd1d`.
623:
624: Once done installing, `halt -p` the new domain (don't reboot or halt, it
625: would reload the INSTALL\_XEN3\_DOMU kernel even if you changed the
626: config file), switch the config file back to the XEN3\_DOMU kernel, and
627: start the new domain again. Now it should be able to use `root on xbd0a`
and you should have a second, functional NetBSD system on your Xen
629: installation.
630:
631: When the new domain is booting you'll see some warnings about *wscons*
632: and the pseudo-terminals. These can be fixed by editing the files
633: `/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
634: `/etc/ttys`, except *console*, like this:
635:
636: console "/usr/libexec/getty Pc" vt100 on secure
637: ttyE0 "/usr/libexec/getty Pc" vt220 off secure
638: ttyE1 "/usr/libexec/getty Pc" vt220 off secure
639: ttyE2 "/usr/libexec/getty Pc" vt220 off secure
640: ttyE3 "/usr/libexec/getty Pc" vt220 off secure
641:
642: Finally, all screens must be commented out from `/etc/wscons.conf`.
643:
644: It is also desirable to add
645:
646: powerd=YES
647:
648: in rc.conf. This way, the domain will be properly shut down if
649: `xm shutdown -R` or `xm shutdown -H` is used on the domain0.
650:
Your domain should now be ready to use; enjoy.
652:
653: Creating an unprivileged Linux domain (domU)
654: --------------------------------------------
655:
656: Creating unprivileged Linux domains isn't much different from
657: unprivileged NetBSD domains, but there are some details to know.
658:
659: First, the second parameter passed to the disk declaration (the '0x1' in
660: the example below)
661:
662: disk = [ 'phy:/dev/wd0e,0x1,w' ]
663:
does matter to Linux. It wants a Linux device number here (e.g. 0x300
for hda). Linux builds device numbers as (major \<\< 8) + minor, so
hda1, which has major 3 and minor 1 on a Linux system, will have
device number 0x301. Alternatively, device names can be used (hda,
hdb, ...) as xentools has a table to map these names to device
numbers. To export a partition to a Linux guest we can use:
670:
671: disk = [ 'phy:/dev/wd0e,0x300,w' ]
672: root = "/dev/hda1 ro"
673:
and it will appear as /dev/hda on the Linux system, and be used as the root
675: partition.
676:
677: To install the Linux system on the partition to be exported to the guest
678: domain, the following method can be used: install sysutils/e2fsprogs
679: from pkgsrc. Use mke2fs to format the partition that will be the root
680: partition of your Linux domain, and mount it. Then copy the files from a
681: working Linux system, make adjustments in `/etc` (fstab, network
682: config). It should also be possible to extract binary packages such as
683: .rpm or .deb directly to the mounted partition using the appropriate
684: tool, possibly running under NetBSD's Linux emulation. Once the
685: filesystem has been populated, umount it. If desirable, the filesystem
686: can be converted to ext3 using tune2fs -j. It should now be possible to
687: boot the Linux guest domain, using one of the vmlinuz-\*-xenU kernels
688: available in the Xen binary distribution.
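
A rough sketch of this procedure, assuming /dev/wd0f is the partition
to be exported and a Linux tree is available under /distro (both
names are hypothetical):

    ## wd0f and /distro are example names
    # mke2fs /dev/rwd0f
    # mount_ext2fs /dev/wd0f /mnt
    # cd /distro && pax -rw -pe . /mnt
    # umount /mnt
    ## optional: convert to ext3
    # tune2fs -j /dev/rwd0f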
689:
To get the Linux console right, you need to add:
691:
692: extra = "xencons=tty1"
693:
to your configuration, since not all Linux distributions auto-attach a
tty to the Xen console.
696:
697: Creating an unprivileged Solaris domain (domU)
698: ----------------------------------------------
699:
700: Download an Opensolaris [release](http://opensolaris.org/os/downloads/)
701: or [development snapshot](http://genunix.org/) DVD image. Attach the DVD
image to a vnd(4) device. Copy the kernel and ramdisk filesystem
703: image to your dom0 filesystem.
704:
705: dom0# mkdir /root/solaris
706: dom0# vnconfig vnd0 osol-1002-124-x86.iso
707: dom0# mount /dev/vnd0a /mnt
708:
709: ## for a 64-bit guest
710: dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris
711: dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris
712:
713: ## for a 32-bit guest
714: dom0# cp /mnt/boot/x86.microroot /root/solaris
715: dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris
716:
717: dom0# umount /mnt
718:
719:
Keep the vnd(4) device configured. For some reason the boot process stalls
721: unless the DVD image is attached to the guest as a "phy" device. Create
722: an initial configuration file with the following contents. Substitute
723: */dev/wd0k* with an empty partition at least 8 GB large.
724:
725: memory = 640
726: name = 'solaris'
727: disk = [ 'phy:/dev/wd0k,0,w' ]
728: disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]
729: vif = [ 'bridge=bridge0' ]
730: kernel = '/root/solaris/unix'
731: ramdisk = '/root/solaris/x86.microroot'
732: # for a 64-bit guest
733: extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'
734: # for a 32-bit guest
735: #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'
736:
737:
738: Start the guest.
739:
740: dom0# xm create -c solaris.cfg
741: Started domain solaris
742: v3.3.2 chgset 'unavailable'
743: SunOS Release 5.11 Version snv_124 64-bit
744: Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
745: Use is subject to license terms.
746: Hostname: opensolaris
747: Remounting root read/write
748: Probing for device nodes ...
749: WARNING: emlxs: ddi_modopen drv/fct failed: err 2
750: Preparing live image for use
751: Done mounting Live image
752:
753:
754: Make sure the network is configured. Note that it can take a minute for
755: the xnf0 interface to appear.
756:
757: opensolaris console login: jack
758: Password: jack
759: Sun Microsystems Inc. SunOS 5.11 snv_124 November 2008
760: jack@opensolaris:~$ pfexec sh
761: sh-3.2# ifconfig -a
762: sh-3.2# exit
763:
764:
765: Set a password for VNC and start the VNC server which provides the X11
766: display where the installation program runs.
767:
768: jack@opensolaris:~$ vncpasswd
769: Password: solaris
770: Verify: solaris
771: jack@opensolaris:~$ cp .Xclients .vnc/xstartup
772: jack@opensolaris:~$ vncserver :1
773:
774:
775: From a remote machine connect to the VNC server. Use `ifconfig xnf0` on
776: the guest to find the correct IP address to use.
777:
778: remote$ vncviewer 172.18.2.99:1
779:
780:
781: It is also possible to launch the installation on a remote X11 display.
782:
783: jack@opensolaris:~$ export DISPLAY=172.18.1.1:0
784: jack@opensolaris:~$ pfexec gui-install
785:
786:
787: After the GUI installation is complete you will be asked to reboot.
788: Before that you need to determine the ZFS ID for the new boot filesystem
789: and update the configuration file accordingly. Return to the guest
790: console.
791:
792: jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs
793: bootfs = 43
794: ^C
795: jack@opensolaris:~$
796:
797:
798: The final configuration file should look like this. Note in particular
799: the last line.
800:
801: memory = 640
802: name = 'solaris'
803: disk = [ 'phy:/dev/wd0k,0,w' ]
804: vif = [ 'bridge=bridge0' ]
805: kernel = '/root/solaris/unix'
806: ramdisk = '/root/solaris/x86.microroot'
807: extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'
808:
809:
810: Restart the guest to verify it works correctly.
811:
812: dom0# xm destroy solaris
813: dom0# xm create -c solaris.cfg
814: Using config file "./solaris.cfg".
815: v3.3.2 chgset 'unavailable'
816: Started domain solaris
817: SunOS Release 5.11 Version snv_124 64-bit
818: Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
819: Use is subject to license terms.
820: WARNING: emlxs: ddi_modopen drv/fct failed: err 2
821: Hostname: osol
822: Configuring devices.
823: Loading smf(5) service descriptions: 160/160
824: svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
825: Reading ZFS config: done.
826: Mounting ZFS filesystems: (6/6)
827: Creating new rsa public/private host key pair
828: Creating new dsa public/private host key pair
829:
830: osol console login:
831:
832:
833: Using PCI devices in guest domains
834: ----------------------------------
835:
836: The domain0 can give other domains access to selected PCI devices. This
837: can allow, for example, a non-privileged domain to have access to a
838: physical network interface or disk controller. However, keep in mind
839: that giving a domain access to a PCI device most likely will give the
840: domain read/write access to the whole physical memory, as PCs don't have
an IOMMU to restrict memory access to DMA-capable devices. Also, it's
not possible to export ISA devices to non-domain0 domains (which means
that the primary VGA adapter can't be exported; a guest domain trying
to access the VGA registers will panic).
845:
846: This functionality is only available in NetBSD-5.1 (and later) domain0
847: and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as
848: support has not been ported to later versions at this time.
849:
For a PCI device to be exported to a domU, it has to be attached to the
851: `pciback` driver in domain0. Devices passed to the domain0 via the
852: pciback.hide boot parameter will attach to `pciback` instead of the
853: usual driver. The list of devices is specified as `(bus:dev.func)`,
854: where bus and dev are 2-digit hexadecimal numbers, and func a
855: single-digit number:
856:
857: pciback.hide=(00:0a.0)(00:06.0)
858:
859: pciback devices should show up in the domain0's boot messages, and the
860: devices should be listed in the `/kern/xen/pci` directory.
861:
862: PCI devices to be exported to a domU are listed in the `pci` array of
the domU's config file, with the format `'0000:bus:dev.func'`:
864:
865: pci = [ '0000:00:06.0', '0000:00:0a.0' ]
866:
867: In the domU an `xpci` device will show up, to which one or more pci
868: busses will attach. Then the PCI drivers will attach to PCI busses as
869: usual. Note that the default NetBSD DOMU kernels do not have `xpci` or
870: any PCI drivers built in by default; you have to build your own kernel
871: to use PCI devices in a domU. Here's a kernel config example:
872:
873: include "arch/i386/conf/XEN3_DOMU"
874: #include "arch/i386/conf/XENU" # in NetBSD 3.0
875:
876: # Add support for PCI busses to the XEN3_DOMU kernel
877: xpci* at xenbus ?
878: pci* at xpci ?
879:
880: # Now add PCI and related devices to be used by this domain
881: # USB Controller and Devices
882:
883: # PCI USB controllers
884: uhci* at pci? dev ? function ? # Universal Host Controller (Intel)
885:
886: # USB bus support
887: usb* at uhci?
888:
889: # USB Hubs
890: uhub* at usb?
891: uhub* at uhub? port ? configuration ? interface ?
892:
893: # USB Mass Storage
894: umass* at uhub? port ? configuration ? interface ?
895: wd* at umass?
896: # SCSI controllers
897: ahc* at pci? dev ? function ? # Adaptec [23]94x, aic78x0 SCSI
898:
899: # SCSI bus support (for both ahc and umass)
900: scsibus* at scsi?
901:
902: # SCSI devices
903: sd* at scsibus? target ? lun ? # SCSI disk drives
904: cd* at scsibus? target ? lun ? # SCSI CD-ROM drives
905:
906:
907: NetBSD as a domU in a VPS
908: =========================
909:
910: The bulk of the HOWTO is about using NetBSD as a dom0 on your own
911: hardware. This section explains how to deal with Xen in a domU as a
912: virtual private server where you do not control or have access to the
913: dom0.
914:
915: TODO: Perhaps reference panix, prmgr, amazon as interesting examples.
916:
TODO: Somewhere, discuss pvgrub and pygrub to load the domU kernel
from the domU filesystem.