1.5 mspo 1: Introduction
1.13 gdt 2: ============
1.1 mspo 3:
[![Xen screenshot](http://www.netbsd.org/gallery/in-Action/hubertf-xens.png)](../../gallery/in-Action/hubertf-xen.png)
1.1 mspo 6:
1.12 gdt 7: Xen is a virtual machine monitor or hypervisor for x86 hardware
8: (i686-class or higher), which supports running multiple guest
9: operating systems on a single physical machine. With Xen, one uses
10: the Xen kernel to control the CPU, memory and console, a dom0
11: operating system which mediates access to other hardware (e.g., disks,
12: network, USB), and one or more domU operating systems which operate in
13: an unprivileged virtualized environment. IO requests from the domU
14: systems are forwarded by the hypervisor (Xen) to the dom0 to be
15: fulfilled.
16:
17: Xen supports two styles of guests. The original is Para-Virtualized
18: (PV) which means that the guest OS does not attempt to access hardware
19: directly, but instead makes hypercalls to the hypervisor. This is
20: analogous to a user-space program making system calls. (The dom0
21: operating system uses PV calls for some functions, such as updating
22: memory mapping page tables, but has direct hardware access for disk
23: and network.) PV guests must be specifically coded for Xen.
24:
25: The more recent style is HVM, which means that the guest does not have
26: code for Xen and need not be aware that it is running under Xen.
27: Attempts to access hardware registers are trapped and emulated. This
28: style is less efficient but can run unmodified guests.
29:
1.29 gdt 30: Generally any amd64 machine will work with Xen and PV guests. In
31: theory i386 computers without amd64 support can be used for Xen <=
32: 4.2, but we have no recent reports of this working (this is a hint).
For HVM guests, hardware virtualization support is required: the VT-x
feature (shown as "VMX") on Intel CPUs, or AMD-V (shown as "SVM") on
AMD CPUs. Running "cpuctl identify 0" will show whether the feature is
present.
1.19 gdt 36:
1.27 jnemeth 37: At boot, the dom0 kernel is loaded as a module with Xen as the kernel.
1.12 gdt 38: The dom0 can start one or more domUs. (Booting is explained in detail
39: in the dom0 section.)
40:
41: NetBSD supports Xen in that it can serve as dom0, be used as a domU,
42: and that Xen kernels and tools are available in pkgsrc. This HOWTO
43: attempts to address both the case of running a NetBSD dom0 on hardware
1.24 gdt 44: and running domUs under it (NetBSD and other), and also running NetBSD
45: as a domU in a VPS.
1.12 gdt 46:
1.20 gdt 47: Some versions of Xen support "PCI passthrough", which means that
48: specific PCI devices can be made available to a specific domU instead
49: of the dom0. This can be useful to let a domU run X11, or access some
50: network interface or other peripheral.
51:
1.12 gdt 52: Prerequisites
1.13 gdt 53: -------------
1.12 gdt 54:
55: Installing NetBSD/Xen is not extremely difficult, but it is more
56: complex than a normal installation of NetBSD.
1.15 gdt 57: In general, this HOWTO is occasionally overly restrictive about how
58: things must be done, guiding the reader to stay on the established
59: path when there are no known good reasons to stray.
1.12 gdt 60:
61: This HOWTO presumes a basic familiarity with the Xen system
1.16 gdt 62: architecture. This HOWTO presumes familiarity with installing NetBSD
63: on i386/amd64 hardware and installing software from pkgsrc.
1.27 jnemeth 64: See also the [Xen website](http://www.xenproject.org/).
1.1 mspo 65:
1.19 gdt 66: History
67: -------
68:
69: NetBSD used to support Xen2; this has been removed.
70:
71: Before NetBSD's native bootloader could support Xen, the use of
72: grub was recommended. If necessary, see the
1.27 jnemeth 73: [old grub information](/ports/xen/howto-grub/).
1.19 gdt 74:
1.15 gdt 75: Versions of Xen and NetBSD
76: ==========================
77:
1.27 jnemeth 78: Most of the installation concepts and instructions are independent
79: of Xen version and NetBSD version. This section gives advice on
80: which version to choose. Versions not in pkgsrc and older unsupported
81: versions of NetBSD are intentionally ignored.
1.15 gdt 82:
83: Xen
84: ---
85:
86: In NetBSD, xen is provided in pkgsrc, via matching pairs of packages
87: xenkernel and xentools. We will refer only to the kernel versions,
88: but note that both packages must be installed together and must have
89: matching versions.
90:
91: xenkernel3 and xenkernel33 provide Xen 3.1 and 3.3. These no longer
receive security patches and should not be used. Xen 3.1 supports PCI
passthrough, as well as non-PAE on i386.
1.15 gdt 94:
95: xenkernel41 provides Xen 4.1. This is no longer maintained by Xen,
96: but as of 2014-12 receives backported security patches. It is a
97: reasonable although trailing-edge choice.
98:
99: xenkernel42 provides Xen 4.2. This is maintained by Xen, but old as
100: of 2014-12.
101:
102: Ideally newer versions of Xen will be added to pkgsrc.
103:
1.26 gdt 104: Note that NetBSD support is called XEN3. It works with 3.1 through
105: 4.2 because the hypercall interface has been stable.
1.20 gdt 106:
1.19 gdt 107: Xen command program
108: -------------------
109:
110: Early Xen used a program called "xm" to manipulate the system from the
111: dom0. Starting in 4.1, a replacement program with similar behavior
1.27 jnemeth 112: called "xl" is provided. In 4.2 and later, "xl" is preferred. 4.4 is
113: the last version that has "xm".
1.19 gdt 114:
1.15 gdt 115: NetBSD
116: ------
117:
118: The netbsd-5, netbsd-6, netbsd-7, and -current branches are all
119: reasonable choices, with more or less the same considerations for
non-Xen use. For production use, netbsd-6, the stable branch of the
most recent release, is recommended. For those wanting to learn Xen,
or who do not have production stability concerns, netbsd-7 is likely
most appropriate.
1.15 gdt 124:
125: As of NetBSD 6, a NetBSD domU will support multiple vcpus. There is
126: no SMP support for NetBSD as dom0. (The dom0 itself doesn't really
127: need SMP; the lack of support is really a problem when using a dom0 as
128: a normal computer.)
129:
1.18 gdt 130: Architecture
131: ------------
132:
1.29 gdt 133: Xen itself can run on i386 or amd64 machines. (Practically, almost
134: any computer where one would want to run Xen supports amd64.) If
135: using an i386 NetBSD kernel for the dom0, PAE is required (PAE
versions are built by default). While an i386 dom0 works fine, amd64
is recommended as the more usual configuration.
138:
139: Xen 4.2 is the last version to support i386 as a host. TODO: Clarify
140: if this is about the CPU having to be amd64, or about the dom0 kernel
141: having to be amd64.
142:
143: One can then run i386 domUs and amd64 domUs, in any combination. If
144: running an i386 NetBSD kernel as a domU, the PAE version is required.
(Note that emacs, at least, fails if built for i386 without PAE and
run with PAE, or vice versa, presumably due to bugs in the undump
code.)
1.18 gdt 147:
1.15 gdt 148: Recommendation
149: --------------
150:
Therefore, this HOWTO recommends running xenkernel42 (and xentools42),
xl, the NetBSD 6 stable branch, and an amd64 kernel as the dom0.
Either the i386 or amd64 version of NetBSD may be used as a domU.
1.15 gdt 154:
1.36 gdt 155: Build problems
156: --------------
157:
158: Ideally, all versions of Xen in pkgsrc would build on all versions of
159: NetBSD on both i386 and amd64. However, that isn't the case. Besides
160: aging code and aging compilers, qemu (included in xentools for HVM
161: support) is difficult to build. The following are known to fail:
162:
163: xenkernel3 netbsd-6 i386
164: xentools42 netbsd-6 i386
165:
166: The following are known to work:
167:
168: xenkernel41 netbsd-5 amd64
169: xentools41 netbsd-5 amd64
170: xenkernel41 netbsd-6 i386
171: xentools41 netbsd-6 i386
172:
1.15 gdt 173: NetBSD as a dom0
174: ================
175:
176: NetBSD can be used as a dom0 and works very well. The following
177: sections address installation, updating NetBSD, and updating Xen.
1.19 gdt 178: Note that it doesn't make sense to talk about installing a dom0 OS
179: without also installing Xen itself. We first address installing
180: NetBSD, which is not yet a dom0, and then adding Xen, pivoting the
181: NetBSD install to a dom0 install by just changing the kernel and boot
182: configuration.
1.15 gdt 183:
1.45 gdt 184: For experimenting with Xen, a machine with as little as 1G of RAM and
100G of disk can work. For running many domUs in production, far
more will be needed.
187:
1.15 gdt 188: Styles of dom0 operation
189: ------------------------
190:
191: There are two basic ways to use Xen. The traditional method is for
192: the dom0 to do absolutely nothing other than providing support to some
193: number of domUs. Such a system was probably installed for the sole
194: purpose of hosting domUs, and sits in a server room on a UPS.
195:
196: The other way is to put Xen under a normal-usage computer, so that the
197: dom0 is what the computer would have been without Xen, perhaps a
198: desktop or laptop. Then, one can run domUs at will. Purists will
199: deride this as less secure than the previous approach, and for a
200: computer whose purpose is to run domUs, they are right. But Xen and a
dom0 (without domUs) is not meaningfully less secure than the same
things running without Xen. One can boot Xen or boot regular NetBSD
alternately with little trouble, simply refraining from starting the
Xen daemons when not running Xen.
205:
206: Note that NetBSD as dom0 does not support multiple CPUs. This will
207: limit the performance of the Xen/dom0 workstation approach.
208:
1.19 gdt 209: Installation of NetBSD
210: ----------------------
1.13 gdt 211:
1.19 gdt 212: First,
1.27 jnemeth 213: [install NetBSD/amd64](/guide/inst/)
1.19 gdt 214: just as you would if you were not using Xen.
215: However, the partitioning approach is very important.
216:
217: If you want to use RAIDframe for the dom0, there are no special issues
218: for Xen. Typically one provides RAID storage for the dom0, and the
1.22 gdt 219: domU systems are unaware of RAID. The 2nd-stage loader bootxx_* skips
220: over a RAID1 header to find /boot from a filesystem within a RAID
221: partition; this is no different when booting Xen.
1.19 gdt 222:
223: There are 4 styles of providing backing storage for the virtual disks
used by domUs: raw partitions, LVM, file-backed vnd(4), and SAN.
225:
226: With raw partitions, one has a disklabel (or gpt) partition sized for
227: each virtual disk to be used by the domU. (If you are able to predict
228: how domU usage will evolve, please add an explanation to the HOWTO.
229: Seriously, needs tend to change over time.)
230:
1.27 jnemeth 231: One can use [lvm(8)](/guide/lvm/) to create logical devices to use
232: for domU disks. This is almost as efficient as raw disk partitions
233: and more flexible. Hence raw disk partitions should typically not
234: be used.
1.19 gdt 235:
236: One can use files in the dom0 filesystem, typically created by dd'ing
237: /dev/zero to create a specific size. This is somewhat less efficient,
238: but very convenient, as one can cp the files for backup, or move them
239: between dom0 hosts.
240:
241: Finally, in theory one can place the files backing the domU disks in a
242: SAN. (This is an invitation for someone who has done this to add a
243: HOWTO page.)
1.1 mspo 244:
1.19 gdt 245: Installation of Xen
246: -------------------
1.1 mspo 247:
1.20 gdt 248: In the dom0, install sysutils/xenkernel42 and sysutils/xentools42 from
249: pkgsrc (or another matching pair).
250: See [the pkgsrc
251: documentation](http://www.NetBSD.org/docs/pkgsrc/) for help with pkgsrc.
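
As a sketch, assuming a pkgsrc tree in /usr/pkgsrc and building from
source (binary packages installed with pkg_add work equally well if
available for your platform):

    cd /usr/pkgsrc/sysutils/xenkernel42 && make install
    cd /usr/pkgsrc/sysutils/xentools42 && make install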
252:
For Xen 3.1, support for HVM guests is in sysutils/xentools3-hvm. More
254: recent versions have HVM support integrated in the main xentools
255: package. It is entirely reasonable to run only PV guests.
256:
257: Next you need to install the selected Xen kernel itself, which is
258: installed by pkgsrc as "/usr/pkg/xen*-kernel/xen.gz". Copy it to /.
259: For debugging, one may copy xen-debug.gz; this is conceptually similar
260: to DIAGNOSTIC and DEBUG in NetBSD. xen-debug.gz is basically only
261: useful with a serial console. Then, place a NetBSD XEN3_DOM0 kernel
262: in /, copied from releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
263: of a NetBSD build. Both xen and NetBSD may be left compressed. (If
264: using i386, use releasedir/i386/binary/kernel/netbsd-XEN3PAE_DOM0.gz.)
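
For example, assuming an amd64 dom0 and a release build whose output
is under /usr/obj/releasedir (adjust to wherever your build's
releasedir actually is), the copies might look like:

    cp /usr/pkg/xen42-kernel/xen.gz /
    cp /usr/obj/releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz /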
265:
In a dom0 kernel, kernfs is mandatory for xend to communicate with the
267: kernel, so ensure that /kern is in fstab.
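
A minimal fstab line for this (create the /kern directory first if it
does not exist, then "mount /kern" or reboot) is:

    kernfs /kern kernfs rw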
268:
269: Because you already installed NetBSD, you have a working boot setup
270: with an MBR bootblock, either bootxx_ffsv1 or bootxx_ffsv2 at the
271: beginning of your root filesystem, /boot present, and likely
272: /boot.cfg. (If not, fix before continuing!)
273:
274: See boot.cfg(5) for an example. The basic line is
275:
1.37 gdt 276: menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
1.20 gdt 277:
278: which specifies that the dom0 should have 256M, leaving the rest to be
1.37 gdt 279: allocated for domUs. In an attempt to add performance, one can also
280: add
281:
282: dom0_max_vcpus=1 dom0_vcpus_pin
283:
284: to force only one vcpu to be provided (since NetBSD dom0 can't use
285: more) and to pin that vcpu to a physical cpu. TODO: benchmark this.
1.20 gdt 286:
287: As with non-Xen systems, you should have a line to boot /netbsd (a
288: kernel that works without Xen) and fallback versions of the non-Xen
289: kernel, Xen, and the dom0 kernel.
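
Putting this together, a /boot.cfg for a Xen dom0 might look roughly
like the following sketch; the banner, menu entries, filenames, and
memory value are assumptions and should match your own installation:

    banner=Welcome to NetBSD
    timeout=5
    default=2
    menu=Boot normally:boot netbsd
    menu=Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=256M
    menu=Xen (fallback):load /netbsd-XEN3_DOM0.old.gz console=pc;multiboot /xen.old.gz dom0_mem=256M
    menu=Boot old NetBSD:boot netbsd.old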
1.1 mspo 290:
1.28 gdt 291: The [HowTo on Installing into
292: RAID-1](http://mail-index.NetBSD.org/port-xen/2006/03/01/0010.html)
293: explains how to set up booting a dom0 with Xen using grub with
294: NetBSD's RAIDframe. (This is obsolete with the use of NetBSD's native
295: boot.)
296:
1.21 gdt 297: Configuring Xen
298: ---------------
299:
300: Now, you have a system that will boot Xen and the dom0 kernel, and
301: just run the dom0 kernel. There will be no domUs, and none can be
1.31 gdt 302: started because you still have to configure the dom0 tools. The
303: daemons which should be run vary with Xen version and with whether one
304: is using xm or xl. Note that xend is for supporting "xm", and should
305: only be used if you plan on using "xm". Do NOT enable xend if you
306: plan on using "xl" as it will cause problems.
1.21 gdt 307:
1.43 gdt 308: The installation of NetBSD should already have created devices for xen
309: (xencons, xenevt), but if they are not present, create them:
310:
311: cd /dev && sh MAKEDEV xen
312:
1.31 gdt 313: TODO: Give 3.1 advice (or remove it from pkgsrc).
314:
315: For 3.3 (and thus xm), add to rc.conf (but note that you should have
316: installed 4.1 or 4.2):
317:
1.32 gdt 318: xend=YES
319: xenbackendd=YES
1.31 gdt 320:
1.33 gdt 321: For 4.1 (and thus xm; xl is believed not to work well), add to rc.conf:
1.31 gdt 322:
323: xend=YES
324: xencommons=YES
325:
TODO: Explain why rc.d/xendomains uses xl when xm is preferred on 4.1.
Or fix the package.
1.31 gdt 328:
For 4.2 with xm, add to rc.conf:
330:
331: xend=YES
332: xencommons=YES
333:
334: For 4.2 with xl (preferred), add to rc.conf:
1.31 gdt 335:
336: TODO: explain if there is a xend replacement
337: xencommons=YES
338:
339: TODO: Recommend for/against xen-watchdog.
1.27 jnemeth 340:
1.43 gdt 341: After you have configured the daemons and either started them or
1.42 gdt 342: rebooted, run the following (or use xl) to inspect Xen's boot
343: messages, available resources, and running domains:
1.34 gdt 344:
1.43 gdt 345: # xm dmesg
346: [xen's boot info]
347: # xm info
348: [available memory, etc.]
349: # xm list
350: Name Id Mem(MB) CPU State Time(s) Console
351: Domain-0 0 64 0 r---- 58.1
1.33 gdt 352:
1.41 gdt 353: anita (for testing NetBSD)
354: --------------------------
355:
356: With the setup so far, one should be able to run anita (see
357: pkgsrc/sysutils/py-anita) to test NetBSD releases, by doing (as root,
358: because anita must create a domU):
359:
360: anita --vmm=xm test file:///usr/obj/i386/
361:
362: Alternatively, one can use --vmm=xl to use xl-based domU creation instead.
363: TODO: check this.
364:
1.40 gdt 365: Xen-specific NetBSD issues
366: --------------------------
367:
There are (at least) two additional ways in which a NetBSD dom0 kernel
differs from a kernel running on bare hardware.
370:
371: One is that modules are not usable in DOM0 kernels, so one must
372: compile in what's needed. It's not really that modules cannot work,
373: but that modules must be built for XEN3_DOM0 because some of the
374: defines change and the normal module builds don't do this. Basically,
375: enabling Xen changes the kernel ABI, and the module build system
376: doesn't cope with this.
377:
378: The other difference is that XEN3_DOM0 does not have exactly the same
379: options as GENERIC. While it is debatable whether or not this is a
380: bug, users should be aware of this and can simply add missing config
381: items if desired.
382:
1.15 gdt 383: Updating NetBSD in a dom0
384: -------------------------
385:
386: This is just like updating NetBSD on bare hardware, assuming the new
387: version supports the version of Xen you are running. Generally, one
388: replaces the kernel and reboots, and then overlays userland binaries
389: and adjusts /etc.
390:
391: Note that one must update both the non-Xen kernel typically used for
392: rescue purposes and the DOM0 kernel used with Xen.
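
A hedged sketch of such an update, assuming an amd64 dom0 and a
completed release build whose output lives under releasedir (adjust
paths to your own build):

    # keep fallback copies of both kernels, then install the new ones
    cp /netbsd /netbsd.old
    cp /netbsd-XEN3_DOM0.gz /netbsd-XEN3_DOM0.old.gz
    gunzip -c releasedir/amd64/binary/kernel/netbsd-GENERIC.gz > /netbsd
    cp releasedir/amd64/binary/kernel/netbsd-XEN3_DOM0.gz /
    # reboot into the new kernel, then unpack the new userland sets
    # (all except etc.tgz and xetc.tgz) over / and merge /etc with
    # etcupdate(8), finishing with postinstall(8) checks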
393:
1.22 gdt 394: To convert from grub to /boot, install an mbr bootblock with fdisk,
395: bootxx_ with installboot, /boot and /boot.cfg. This really should be
396: no different than completely reinstalling boot blocks on a non-Xen
397: system.
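
A hedged sketch for wd0 with an FFSv1 root filesystem (check which
bootxx_* matches your root filesystem before running installboot):

    fdisk -i wd0                     # install default MBR bootcode
    installboot -v /dev/rwd0a /usr/mdec/bootxx_ffsv1
    cp /usr/mdec/boot /boot
    # then create /boot.cfg as described above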
398:
1.15 gdt 399: Updating Xen versions
400: ---------------------
401:
1.21 gdt 402: Updating Xen is conceptually not difficult, but can run into all the
403: issues found when installing Xen. Assuming migration from 4.1 to 4.2,
404: remove the xenkernel41 and xentools41 packages and install the
405: xenkernel42 and xentools42 packages. Copy the 4.2 xen.gz to /.
406:
407: Ensure that the contents of /etc/rc.d/xen* are correct. Enable the
408: correct set of daemons. Ensure that the domU config files are valid
409: for the new version.
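
For example (pkgsrc from source; binary packages work similarly):

    pkg_delete xentools41 xenkernel41
    cd /usr/pkgsrc/sysutils/xenkernel42 && make install
    cd /usr/pkgsrc/sysutils/xentools42 && make install
    cp /usr/pkg/xen42-kernel/xen.gz /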
1.15 gdt 410:
1.28 gdt 411:
412: Unprivileged domains (domU)
413: ===========================
414:
415: This section describes general concepts about domUs. It does not
1.33 gdt 416: address specific domU operating systems or how to install them. The
417: config files for domUs are typically in /usr/pkg/etc/xen, and are
typically named so that the file name, domU name, and the domU's host
name match.
420:
421: The domU is provided with cpu and memory by Xen, configured by the
422: dom0. The domU is provided with disk and network by the dom0,
423: mediated by Xen, and configured in the dom0.
424:
425: Entropy in domUs can be an issue; physical disks and network are on
426: the dom0. NetBSD's /dev/random system works, but is often challenged.
427:
1.48 ! gdt 428: Config files
! 429: ------------
! 430:
! 431: There is no good order to present config files and the concepts
! 432: surrounding what is being configured. We first show an example config
! 433: file, and then in the various sections give details.
! 434:
! 435: See (at least in xentools41) /usr/pkg/share/examples/xen/xmexample*,
! 436: for a large number of well-commented examples, mostly for running
! 437: GNU/Linux.
! 438:
! 439: The following is an example minimal domain configuration file
! 440: "/usr/pkg/etc/xen/foo". It is (with only a name change) an actual
! 441: known working config file on Xen 4.1 (NetBSD 5 amd64 dom0 and NetBSD 5
! 442: i386 domU). The domU serves as a network file server.
! 443:
! 444: # -*- mode: python; -*-
! 445:
! 446: kernel = "/netbsd-XEN3PAE_DOMU-i386-foo.gz"
! 447: memory = 1024
! 448: vif = [ 'mac=aa:00:00:d1:00:09,bridge=bridge0' ]
! 449: disk = [ 'file:/n0/xen/foo-wd0,0x0,w',
! 450: 'file:/n0/xen/foo-wd1,0x1,w' ]
! 451:
! 452: The domain will have the same name as the file. The kernel has the
! 453: host/domU name in it, so that on the dom0 one can update the various
! 454: domUs independently. The vif line causes an interface to be provided,
! 455: with a specific mac address (do not reuse MAC addresses!), in bridge
! 456: mode. Two disks are provided, and they are both writable; the bits
! 457: are stored in files and Xen attaches them to a vnd(4) device in the
dom0 on domain creation. The system treats xbd0 as the boot device
! 459: without needing explicit configuration.
! 460:
! 461: By default xm looks for domain config files in /usr/pkg/etc/xen. Note
! 462: that "xm create" takes the name of a config file, while other commands
take the name of a domain. The following commands, respectively,
create a domain, connect to its console, create it while attaching
the console, shut it down, and check whether it has finished stopping
(use xl instead with Xen >= 4.2):
! 467:
! 468: xm create foo
! 469: xm console foo
! 470: xm create -c foo
! 471: xm shutdown foo
! 472: xm list
! 473:
! 474: Typing ^] will exit the console session. Shutting down a domain is
! 475: equivalent to pushing the power button; a NetBSD domU will receive a
! 476: power-press event and do a clean shutdown. Shutting down the dom0
! 477: will trigger controlled shutdowns of all configured domUs.
! 478:
! 479: domU kernels
! 480: ------------
! 481:
! 482: On a physical computer, the BIOS reads sector 0, and a chain of boot
! 483: loaders finds and loads a kernel. Normally this comes from the root
! 484: filesystem. With Xen domUs, the process is totally different. The
! 485: normal path is for the domU kernel to be a file in the dom0's
! 486: filesystem. At the request of the dom0, Xen loads that kernel into a
! 487: new domU instance and starts execution. While domU kernels can be
! 488: anyplace, reasonable places to store domU kernels on the dom0 are in /
! 489: (so they are near the dom0 kernel), in /usr/pkg/etc/xen (near the
! 490: config files), or in /u0/xen (where the vdisks are).
! 491:
! 492: See the VPS section near the end for discussion of alternate ways to
! 493: obtain domU kernels.
! 494:
1.33 gdt 495: CPU and memory
496: --------------
497:
1.48 ! gdt 498: A domain is provided with some number of vcpus, less than the number
! 499: of cpus seen by the hypervisor. (For a dom0, this is controlled by
! 500: the boot argument "dom0_max_vcpus=1".) For a domU, it is controlled
! 501: from the config file by the "vcpus = N" directive.
! 502:
! 503: A domain is provided with memory; this is controlled in the config
! 504: file by "memory = N" (in megabytes). In the straightforward case, the
sum of the memory allocated to the dom0 and all domUs must be less
1.33 gdt 506: than the available memory.
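
For example, a domU config file that assigns two vcpus and 512M would
contain:

    vcpus = 2
    memory = 512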
507:
508: Xen also provides a "balloon" driver, which can be used to let domains
509: use more memory temporarily. TODO: Explain better, and explain how
510: well it works with NetBSD.
1.28 gdt 511:
512: Virtual disks
513: -------------
514:
1.33 gdt 515: With the file/vnd style, typically one creates a directory,
516: e.g. /u0/xen, on a disk large enough to hold virtual disks for all
517: domUs. Then, for each domU disk, one writes zeros to a file that then
518: serves to hold the virtual disk's bits; a suggested name is foo-xbd0
519: for the first virtual disk for the domU called foo. Writing zeros to
520: the file serves two purposes. One is that preallocating the contents
521: improves performance. The other is that vnd on sparse files has
522: failed to work. TODO: give working/notworking NetBSD versions for
523: sparse vnd. Note that the use of file/vnd for Xen is not really
524: different than creating a file-backed virtual disk for some other
1.39 gdt 525: purpose, except that xentools handles the vnconfig commands. To
526: create an empty 4G virtual disk, simply do
527:
528: dd if=/dev/zero of=foo-xbd0 bs=1m count=4096
1.33 gdt 529:
530: With the lvm style, one creates logical devices. They are then used
similarly to vnds; a sketch is shown below.
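
A minimal sketch using lvm(8) from the base system, assuming a spare
disk wd1 dedicated to domU storage (the volume group and logical
volume names are arbitrary):

    lvm pvcreate /dev/rwd1d
    lvm vgcreate vg0 /dev/rwd1d
    lvm lvcreate -L 4G -n foo-xbd0 vg0

The resulting device (/dev/vg0/foo-xbd0) can then be used with a
"phy:" disk specification in the domU config file.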
! 532:
! 533: In domU config files, the disks are defined as a sequence of 3-tuples.
! 534: The first element is "method:/path/to/disk". Common methods are
! 535: "file:" for file-backed vnd. and "phy:" for something that is already
! 536: a (TODO: character or block) device.
! 537:
! 538: The second element is an artifact of how virtual disks are passed to
! 539: Linux, and a source of confusion with NetBSD Xen usage. Linux domUs
! 540: are given a device name to associate with the disk, and values like
! 541: "hda1" or "sda1" are common. In a NetBSD domU, the first disk appears
! 542: as xbd0, the second as xbd1, and so on. However, xm/xl demand a
! 543: second argument. The name given is converted to a major/minor by
! 544: consulting /dev and this is passed to the domU (TODO: check this). In
! 545: the general case, the dom0 and domU can be different operating
! 546: systems, and it is an unwarranted assumption that they have consistent
! 547: numbering in /dev, or even that the dom0 OS has a /dev. With NetBSD
! 548: as both dom0 and domU, using values of 0x0 for the first disk and 0x1
! 549: for the second works fine and avoids this issue.
! 550:
! 551: The third element is "w" for writable disks, and "r" for read-only
! 552: disks.
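
For example, the following (with hypothetical paths) attaches a
writable file-backed first disk and a read-only physical second disk:

    disk = [ 'file:/u0/xen/foo-xbd0,0x0,w',
             'phy:/dev/cd0a,0x1,r' ]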
1.28 gdt 553:
554: Virtual Networking
555: ------------------
556:
1.46 gdt 557: Xen provides virtual ethernets, each of which connects the dom0 and a
558: domU. For each virtual network, there is an interface "xvifN.M" in
559: the dom0, and in domU index N, a matching interface xennetM (NetBSD
560: name). The interfaces behave as if there is an Ethernet with two
561: adaptors connected. From this primitive, one can construct various
562: configurations. We focus on two common and useful cases for which
563: there are existing scripts: bridging and NAT.
1.28 gdt 564:
1.48 ! gdt 565: With bridging (in the example above), the domU perceives itself to be
! 566: on the same network as the dom0. For server virtualization, this is
! 567: usually best. Bridging is accomplished by creating a bridge(4) device
! 568: and adding the dom0's physical interface and the various xvifN.0
! 569: interfaces to the bridge. One specifies "bridge=bridge0" in the domU
! 570: config file. The bridge must be set up already in the dom0; an
! 571: example /etc/ifconfig.bridge0 is:
1.46 gdt 572:
573: create
574: up
575: !brconfig bridge0 add wm0
1.28 gdt 576:
577: With NAT, the domU perceives itself to be behind a NAT running on the
578: dom0. This is often appropriate when running Xen on a workstation.
1.48 ! gdt 579: TODO: NAT appears to be configured by "vif = [ '' ]".
1.28 gdt 580:
1.33 gdt 581: Sizing domains
582: --------------
583:
584: Modern x86 hardware has vast amounts of resources. However, many
585: virtual servers can function just fine on far less. A system with
586: 256M of RAM and a 4G disk can be a reasonable choice. Note that it is
587: far easier to adjust virtual resources than physical ones. For
588: memory, it's just a config file edit and a reboot. For disk, one can
589: create a new file and vnconfig it (or lvm), and then dump/restore,
590: just like updating physical disks, but without having to be there and
591: without those pesky connectors.
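
A hedged sketch of growing a file-backed disk this way, with
hypothetical paths and sizes; the domU should be shut down, and the
new disk is built on spare vnd devices in the dom0:

    dd if=/dev/zero of=/u0/xen/foo-xbd0.new bs=1m count=8192
    vnconfig vnd0 /u0/xen/foo-xbd0          # old disk
    vnconfig vnd1 /u0/xen/foo-xbd0.new      # new, larger disk
    disklabel -e -I vnd1                    # add an 'a' partition
    newfs /dev/rvnd1a
    mount -o ro /dev/vnd0a /mnt
    mount /dev/vnd1a /mnt2
    dump -0 -f - /mnt | (cd /mnt2 && restore -rf -)
    umount /mnt && umount /mnt2
    vnconfig -u vnd0 && vnconfig -u vnd1
    mv /u0/xen/foo-xbd0.new /u0/xen/foo-xbd0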
592:
1.48 ! gdt 593: Starting domains automatically
! 594: ------------------------------
1.28 gdt 595:
1.48 ! gdt 596: To start domains foo at bar at boot and shut them down cleanly on dom0
! 597: shutdown, in rc.conf add:
1.28 gdt 598:
1.48 ! gdt 599: xendomains="foo bar"
1.28 gdt 600:
601: TODO: Explain why 4.1 rc.d/xendomains has xl, when one should use xm
on 4.1. Or fix the xentools41 package to have xm.
1.28 gdt 603:
604: Creating specific unprivileged domains (domU)
605: =============================================
1.14 gdt 606:
607: Creating domUs is almost entirely independent of operating system. We
608: first explain NetBSD, and then differences for Linux and Solaris.
1.43 gdt 609: Note that you must have already completed the dom0 setup so that "xm
610: list" (or "xl list") works.
1.14 gdt 611:
612: Creating an unprivileged NetBSD domain (domU)
613: ---------------------------------------------
1.1 mspo 614:
615: 'xm create' allows you to create a new domain. It uses a config file in
616: PKG\_SYSCONFDIR for its parameters. By default, this file will be in
1.5 mspo 617: `/usr/pkg/etc/xen/`. On creation, a kernel has to be specified, which
618: will be executed in the new domain (this kernel is in the *domain0* file
619: system, not on the new domain virtual disk; but please note, you should
620: install the same kernel into *domainU* as `/netbsd` in order to make
1.27 jnemeth 621: your system tools, like savecore(8), work). A suitable kernel is
1.5 mspo 622: provided as part of the i386 and amd64 binary sets: XEN3\_DOMU.
1.1 mspo 623:
624: Here is an /usr/pkg/etc/xen/nbsd example config file:
625:
1.3 mspo 626: # -*- mode: python; -*-
627: #============================================================================
628: # Python defaults setup for 'xm create'.
629: # Edit this file to reflect the configuration of your system.
630: #============================================================================
1.5 mspo 631:
1.3 mspo 632: #----------------------------------------------------------------------------
633: # Kernel image file. This kernel will be loaded in the new domain.
634: kernel = "/home/bouyer/netbsd-XEN3_DOMU"
635: #kernel = "/home/bouyer/netbsd-INSTALL_XEN3_DOMU"
1.5 mspo 636:
1.3 mspo 637: # Memory allocation (in megabytes) for the new domain.
638: memory = 128
1.5 mspo 639:
1.3 mspo 640: # A handy name for your new domain. This will appear in 'xm list',
641: # and you can use this as parameters for xm in place of the domain
642: # number. All domains must have different names.
643: #
644: name = "nbsd"
1.5 mspo 645:
1.3 mspo 646: # The number of virtual CPUs this domain has.
647: #
648: vcpus = 1
1.5 mspo 649:
1.3 mspo 650: #----------------------------------------------------------------------------
651: # Define network interfaces for the new domain.
1.5 mspo 652:
1.3 mspo 653: # Number of network interfaces (must be at least 1). Default is 1.
654: nics = 1
1.5 mspo 655:
1.3 mspo 656: # Define MAC and/or bridge for the network interfaces.
657: #
658: # The MAC address specified in ``mac'' is the one used for the interface
659: # in the new domain. The interface in domain0 will use this address XOR'd
660: # with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random
661: # MACs are assigned if not given.
662: #
663: # ``bridge'' is a required parameter, which will be passed to the
664: # vif-script called by xend(8) when a new domain is created to configure
665: # the new xvif interface in domain0.
666: #
667: # In this example, the xvif is added to bridge0, which should have been
668: # set up prior to the new domain being created -- either in the
669: # ``network'' script or using a /etc/ifconfig.bridge0 file.
670: #
671: vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]
1.5 mspo 672:
1.3 mspo 673: #----------------------------------------------------------------------------
674: # Define the disk devices you want the domain to have access to, and
675: # what you want them accessible as.
676: #
677: # Each disk entry is of the form:
678: #
1.5 mspo 679: # phy:DEV,VDEV,MODE
1.3 mspo 680: #
681: # where DEV is the device, VDEV is the device name the domain will see,
682: # and MODE is r for read-only, w for read-write. You can also create
683: # file-backed domains using disk entries of the form:
684: #
1.5 mspo 685: # file:PATH,VDEV,MODE
1.3 mspo 686: #
687: # where PATH is the path to the file used as the virtual disk, and VDEV
688: # and MODE have the same meaning as for ``phy'' devices.
689: #
690: # VDEV doesn't really matter for a NetBSD guest OS (it's just used as an index),
691: # but it does for Linux.
692: # Worse, the device has to exist in /dev/ of domain0, because xm will
693: # try to stat() it. This means that in order to load a Linux guest OS
694: # from a NetBSD domain0, you'll have to create /dev/hda1, /dev/hda2, ...
695: # on domain0, with the major/minor from Linux :(
696: # Alternatively it's possible to specify the device number in hex,
697: # e.g. 0x301 for /dev/hda1, 0x302 for /dev/hda2, etc ...
1.5 mspo 698:
1.3 mspo 699: disk = [ 'phy:/dev/wd0e,0x1,w' ]
700: #disk = [ 'file:/var/xen/nbsd-disk,0x01,w' ]
701: #disk = [ 'file:/var/xen/nbsd-disk,0x301,w' ]
1.5 mspo 702:
1.3 mspo 703: #----------------------------------------------------------------------------
704: # Set the kernel command line for the new domain.
1.5 mspo 705:
1.3 mspo 706: # Set root device. This one does matter for NetBSD
707: root = "xbd0"
708: # extra parameters passed to the kernel
709: # this is where you can set boot flags like -s, -a, etc ...
710: #extra = ""
1.5 mspo 711:
1.3 mspo 712: #----------------------------------------------------------------------------
713: # Set according to whether you want the domain restarted when it exits.
714: # The default is False.
715: #autorestart = True
1.5 mspo 716:
1.3 mspo 717: # end of nbsd config file ====================================================
1.1 mspo 718:
719: When a new domain is created, xen calls the
1.5 mspo 720: `/usr/pkg/etc/xen/vif-bridge` script for each virtual network interface
721: created in *domain0*. This can be used to automatically configure the
722: xvif?.? interfaces in *domain0*. In our example, these will be bridged
723: with the bridge0 device in *domain0*, but the bridge has to exist first.
724: To do this, create the file `/etc/ifconfig.bridge0` and make it look
725: like this:
1.1 mspo 726:
1.3 mspo 727: create
728: !brconfig $int add ex0 up
1.1 mspo 729:
1.5 mspo 730: (replace `ex0` with the name of your physical interface). Then bridge0
1.27 jnemeth 731: will be created on boot. See the bridge(4) man page for details.
1.1 mspo 732:
So, here is a suitable `/usr/pkg/etc/xen/vif-bridge` for configuring
the xvif?.? interfaces (a working vif-bridge is also provided with
xentools20):
1.1 mspo 735:
1.5 mspo 736: #!/bin/sh
1.3 mspo 737: #============================================================================
1.48 ! gdt 738: # $NetBSD: howto.mdwn,v 1.47 2014/12/26 18:35:45 gdt Exp $
1.3 mspo 739: #
740: # /usr/pkg/etc/xen/vif-bridge
741: #
742: # Script for configuring a vif in bridged mode with a dom0 interface.
743: # The xend(8) daemon calls a vif script when bringing a vif up or down.
744: # The script name to use is defined in /usr/pkg/etc/xen/xend-config.sxp
745: # in the ``vif-script'' field.
746: #
747: # Usage: vif-bridge up|down [var=value ...]
748: #
749: # Actions:
1.5 mspo 750: # up Adds the vif interface to the bridge.
751: # down Removes the vif interface from the bridge.
1.3 mspo 752: #
753: # Variables:
1.5 mspo 754: # domain name of the domain the interface is on (required).
    # vif            vif interface name (required).
756: # mac vif MAC address (required).
757: # bridge bridge to add the vif to (required).
1.3 mspo 758: #
759: # Example invocation:
760: #
761: # vif-bridge up domain=VM1 vif=xvif1.0 mac="ee:14:01:d0:ec:af" bridge=bridge0
762: #
763: #============================================================================
1.5 mspo 764:
1.3 mspo 765: # Exit if anything goes wrong
766: set -e
1.5 mspo 767:
1.3 mspo 768: echo "vif-bridge $*"
1.5 mspo 769:
1.3 mspo 770: # Operation name.
771: OP=$1; shift
1.5 mspo 772:
1.3 mspo 773: # Pull variables in args into environment
774: for arg ; do export "${arg}" ; done
1.5 mspo 775:
1.3 mspo 776: # Required parameters. Fail if not set.
777: domain=${domain:?}
778: vif=${vif:?}
779: mac=${mac:?}
780: bridge=${bridge:?}
1.5 mspo 781:
1.3 mspo 782: # Optional parameters. Set defaults.
783: ip=${ip:-''} # default to null (do nothing)
1.5 mspo 784:
1.3 mspo 785: # Are we going up or down?
786: case $OP in
1.5 mspo 787: up) brcmd='add' ;;
1.3 mspo 788: down) brcmd='delete' ;;
789: *)
1.5 mspo 790: echo 'Invalid command: ' $OP
791: echo 'Valid commands are: up, down'
792: exit 1
793: ;;
1.3 mspo 794: esac
1.5 mspo 795:
1.3 mspo 796: # Don't do anything if the bridge is "null".
797: if [ "${bridge}" = "null" ] ; then
1.5 mspo 798: exit
1.3 mspo 799: fi
1.5 mspo 800:
1.3 mspo 801: # Don't do anything if the bridge doesn't exist.
802: if ! ifconfig -l | grep "${bridge}" >/dev/null; then
1.5 mspo 803: exit
1.3 mspo 804: fi
1.5 mspo 805:
1.3 mspo 806: # Add/remove vif to/from bridge.
807: ifconfig x${vif} $OP
808: brconfig ${bridge} ${brcmd} x${vif}
1.1 mspo 809:
810: Now, running
811:
1.3 mspo 812: xm create -c /usr/pkg/etc/xen/nbsd
1.1 mspo 813:
1.5 mspo 814: should create a domain and load a NetBSD kernel in it. (Note: `-c`
815: causes xm to connect to the domain's console once created.) The kernel
816: will try to find its root file system on xbd0 (i.e., wd0e) which hasn't
817: been created yet. wd0e will be seen as a disk device in the new domain,
818: so it will be 'sub-partitioned'. We could attach a ccd to wd0e in
819: *domain0* and partition it, newfs and extract the NetBSD/i386 or amd64
820: tarballs there, but there's an easier way: load the
821: `netbsd-INSTALL_XEN3_DOMU` kernel provided in the NetBSD binary sets.
822: Like other install kernels, it contains a ramdisk with sysinst, so you
823: can install NetBSD using sysinst on your new domain.
1.1 mspo 824:
825: If you want to install NetBSD/Xen with a CDROM image, the following line
1.5 mspo 826: should be used in the `/usr/pkg/etc/xen/nbsd` file:
1.1 mspo 827:
1.3 mspo 828: disk = [ 'phy:/dev/wd0e,0x1,w', 'phy:/dev/cd0a,0x2,r' ]
1.1 mspo 829:
830: After booting the domain, the option to install via CDROM may be
1.5 mspo 831: selected. The CDROM device should be changed to `xbd1d`.
1.1 mspo 832:
1.5 mspo 833: Once done installing, `halt -p` the new domain (don't reboot or halt, it
834: would reload the INSTALL\_XEN3\_DOMU kernel even if you changed the
1.1 mspo 835: config file), switch the config file back to the XEN3\_DOMU kernel, and
1.5 mspo 836: start the new domain again. Now it should be able to use `root on xbd0a`
837: and you should have a second, functional NetBSD system on your xen
838: installation.
1.1 mspo 839:
840: When the new domain is booting you'll see some warnings about *wscons*
841: and the pseudo-terminals. These can be fixed by editing the files
1.5 mspo 842: `/etc/ttys` and `/etc/wscons.conf`. You must disable all terminals in
843: `/etc/ttys`, except *console*, like this:
1.1 mspo 844:
1.3 mspo 845: console "/usr/libexec/getty Pc" vt100 on secure
846: ttyE0 "/usr/libexec/getty Pc" vt220 off secure
847: ttyE1 "/usr/libexec/getty Pc" vt220 off secure
848: ttyE2 "/usr/libexec/getty Pc" vt220 off secure
849: ttyE3 "/usr/libexec/getty Pc" vt220 off secure
1.1 mspo 850:
1.5 mspo 851: Finally, all screens must be commented out from `/etc/wscons.conf`.
1.1 mspo 852:
853: It is also desirable to add
854:
1.3 mspo 855: powerd=YES
1.1 mspo 856:
1.5 mspo 857: in rc.conf. This way, the domain will be properly shut down if
858: `xm shutdown -R` or `xm shutdown -H` is used on the domain0.
1.1 mspo 859:
860: Your domain should be now ready to work, enjoy.
861:
1.14 gdt 862: Creating an unprivileged Linux domain (domU)
1.5 mspo 863: --------------------------------------------
1.1 mspo 864:
865: Creating unprivileged Linux domains isn't much different from
866: unprivileged NetBSD domains, but there are some details to know.
867:
868: First, the second parameter passed to the disk declaration (the '0x1' in
869: the example below)
870:
1.3 mspo 871: disk = [ 'phy:/dev/wd0e,0x1,w' ]
1.1 mspo 872:
873: does matter to Linux. It wants a Linux device number here (e.g. 0x300
874: for hda). Linux builds device numbers as: (major \<\< 8 + minor). So,
875: hda1 which has major 3 and minor 1 on a Linux system will have device
876: number 0x301. Alternatively, devices names can be used (hda, hdb, ...)
877: as xentools has a table to map these names to devices numbers. To export
878: a partition to a Linux guest we can use:
879:
1.3 mspo 880: disk = [ 'phy:/dev/wd0e,0x300,w' ]
881: root = "/dev/hda1 ro"
1.1 mspo 882:
883: and it will appear as /dev/hda on the Linux system, and be used as root
884: partition.
885:
886: To install the Linux system on the partition to be exported to the guest
887: domain, the following method can be used: install sysutils/e2fsprogs
888: from pkgsrc. Use mke2fs to format the partition that will be the root
889: partition of your Linux domain, and mount it. Then copy the files from a
1.5 mspo 890: working Linux system, make adjustments in `/etc` (fstab, network
891: config). It should also be possible to extract binary packages such as
892: .rpm or .deb directly to the mounted partition using the appropriate
893: tool, possibly running under NetBSD's Linux emulation. Once the
894: filesystem has been populated, umount it. If desirable, the filesystem
895: can be converted to ext3 using tune2fs -j. It should now be possible to
896: boot the Linux guest domain, using one of the vmlinuz-\*-xenU kernels
897: available in the Xen binary distribution.
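
A hedged sketch of the formatting and population steps, assuming the
partition to export is wd0f and that a tree of Linux files is
available under /linux-root (both names are placeholders):

    mke2fs /dev/rwd0f                 # from sysutils/e2fsprogs
    mount_ext2fs /dev/wd0f /mnt
    cp -Rp /linux-root/. /mnt/
    # edit /mnt/etc/fstab and network configuration as needed
    umount /mnt
    tune2fs -j /dev/rwd0f             # optional: convert to ext3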
1.1 mspo 898:
899: To get the linux console right, you need to add:
900:
1.3 mspo 901: extra = "xencons=tty1"
1.1 mspo 902:
903: to your configuration since not all linux distributions auto-attach a
904: tty to the xen console.
905:
1.14 gdt 906: Creating an unprivileged Solaris domain (domU)
1.5 mspo 907: ----------------------------------------------
1.1 mspo 908:
909: Download an Opensolaris [release](http://opensolaris.org/os/downloads/)
910: or [development snapshot](http://genunix.org/) DVD image. Attach the DVD
1.5 mspo 911: image to a MAN.VND.4 device. Copy the kernel and ramdisk filesystem
912: image to your dom0 filesystem.
1.1 mspo 913:
1.3 mspo 914: dom0# mkdir /root/solaris
915: dom0# vnconfig vnd0 osol-1002-124-x86.iso
916: dom0# mount /dev/vnd0a /mnt
1.5 mspo 917:
1.3 mspo 918: ## for a 64-bit guest
919: dom0# cp /mnt/boot/amd64/x86.microroot /root/solaris
920: dom0# cp /mnt/platform/i86xpv/kernel/amd64/unix /root/solaris
1.5 mspo 921:
1.3 mspo 922: ## for a 32-bit guest
923: dom0# cp /mnt/boot/x86.microroot /root/solaris
924: dom0# cp /mnt/platform/i86xpv/kernel/unix /root/solaris
1.5 mspo 925:
1.3 mspo 926: dom0# umount /mnt
1.5 mspo 927:
928:
Keep the vnd(4) device configured. For some reason the boot process stalls
930: unless the DVD image is attached to the guest as a "phy" device. Create
931: an initial configuration file with the following contents. Substitute
932: */dev/wd0k* with an empty partition at least 8 GB large.
1.1 mspo 933:
1.4 mspo 934: memory = 640
935: name = 'solaris'
936: disk = [ 'phy:/dev/wd0k,0,w' ]
937: disk += [ 'phy:/dev/vnd0d,6:cdrom,r' ]
938: vif = [ 'bridge=bridge0' ]
939: kernel = '/root/solaris/unix'
940: ramdisk = '/root/solaris/x86.microroot'
941: # for a 64-bit guest
942: extra = '/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom'
943: # for a 32-bit guest
944: #extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'
1.5 mspo 945:
946:
1.1 mspo 947: Start the guest.
948:
1.4 mspo 949: dom0# xm create -c solaris.cfg
950: Started domain solaris
951: v3.3.2 chgset 'unavailable'
952: SunOS Release 5.11 Version snv_124 64-bit
953: Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
954: Use is subject to license terms.
955: Hostname: opensolaris
956: Remounting root read/write
957: Probing for device nodes ...
958: WARNING: emlxs: ddi_modopen drv/fct failed: err 2
959: Preparing live image for use
960: Done mounting Live image
1.5 mspo 961:
1.1 mspo 962:
963: Make sure the network is configured. Note that it can take a minute for
964: the xnf0 interface to appear.
965:
1.4 mspo 966: opensolaris console login: jack
967: Password: jack
968: Sun Microsystems Inc. SunOS 5.11 snv_124 November 2008
969: jack@opensolaris:~$ pfexec sh
970: sh-3.2# ifconfig -a
971: sh-3.2# exit
1.5 mspo 972:
1.1 mspo 973:
974: Set a password for VNC and start the VNC server which provides the X11
975: display where the installation program runs.
976:
1.4 mspo 977: jack@opensolaris:~$ vncpasswd
978: Password: solaris
979: Verify: solaris
980: jack@opensolaris:~$ cp .Xclients .vnc/xstartup
981: jack@opensolaris:~$ vncserver :1
1.5 mspo 982:
1.1 mspo 983:
1.5 mspo 984: From a remote machine connect to the VNC server. Use `ifconfig xnf0` on
985: the guest to find the correct IP address to use.
1.1 mspo 986:
1.4 mspo 987: remote$ vncviewer 172.18.2.99:1
1.5 mspo 988:
1.1 mspo 989:
990: It is also possible to launch the installation on a remote X11 display.
991:
1.4 mspo 992: jack@opensolaris:~$ export DISPLAY=172.18.1.1:0
993: jack@opensolaris:~$ pfexec gui-install
1.5 mspo 994:
1.1 mspo 995:
996: After the GUI installation is complete you will be asked to reboot.
997: Before that you need to determine the ZFS ID for the new boot filesystem
998: and update the configuration file accordingly. Return to the guest
999: console.
1000:
1.4 mspo 1001: jack@opensolaris:~$ pfexec zdb -vvv rpool | grep bootfs
1002: bootfs = 43
1003: ^C
1004: jack@opensolaris:~$
1.5 mspo 1005:
1.1 mspo 1006:
1007: The final configuration file should look like this. Note in particular
1008: the last line.
1009:
1.4 mspo 1010: memory = 640
1011: name = 'solaris'
1012: disk = [ 'phy:/dev/wd0k,0,w' ]
1013: vif = [ 'bridge=bridge0' ]
1014: kernel = '/root/solaris/unix'
1015: ramdisk = '/root/solaris/x86.microroot'
1016: extra = '/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/43,bootpath="/xpvd/xdf@0:a"'
1.5 mspo 1017:
1.1 mspo 1018:
1019: Restart the guest to verify it works correctly.
1020:
1.4 mspo 1021: dom0# xm destroy solaris
1022: dom0# xm create -c solaris.cfg
1023: Using config file "./solaris.cfg".
1024: v3.3.2 chgset 'unavailable'
1025: Started domain solaris
1026: SunOS Release 5.11 Version snv_124 64-bit
1027: Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
1028: Use is subject to license terms.
1029: WARNING: emlxs: ddi_modopen drv/fct failed: err 2
1030: Hostname: osol
1031: Configuring devices.
1032: Loading smf(5) service descriptions: 160/160
1033: svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
1034: Reading ZFS config: done.
1035: Mounting ZFS filesystems: (6/6)
1036: Creating new rsa public/private host key pair
1037: Creating new dsa public/private host key pair
1.5 mspo 1038:
1.4 mspo 1039: osol console login:
1.5 mspo 1040:
1.1 mspo 1041:
1042: Using PCI devices in guest domains
1.14 gdt 1043: ----------------------------------
1.1 mspo 1044:
1045: The domain0 can give other domains access to selected PCI devices. This
1046: can allow, for example, a non-privileged domain to have access to a
1047: physical network interface or disk controller. However, keep in mind
1048: that giving a domain access to a PCI device most likely will give the
1049: domain read/write access to the whole physical memory, as PCs don't have
an IOMMU to restrict memory access to DMA-capable devices. Also, it's not
1051: possible to export ISA devices to non-domain0 domains (which means that
1052: the primary VGA adapter can't be exported. A guest domain trying to
1053: access the VGA registers will panic).
1054:
1055: This functionality is only available in NetBSD-5.1 (and later) domain0
1056: and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as
1057: support has not been ported to later versions at this time.
1058:
For a PCI device to be exported to a domU, it has to be attached to the
1.5 mspo 1060: `pciback` driver in domain0. Devices passed to the domain0 via the
1061: pciback.hide boot parameter will attach to `pciback` instead of the
1062: usual driver. The list of devices is specified as `(bus:dev.func)`,
1063: where bus and dev are 2-digit hexadecimal numbers, and func a
1064: single-digit number:
1.1 mspo 1065:
1.4 mspo 1066: pciback.hide=(00:0a.0)(00:06.0)
1.1 mspo 1067:
1068: pciback devices should show up in the domain0's boot messages, and the
1.5 mspo 1069: devices should be listed in the `/kern/xen/pci` directory.
1.1 mspo 1070:
1.5 mspo 1071: PCI devices to be exported to a domU are listed in the `pci` array of
1072: the domU's config file, with the format `'0000:bus:dev.func'`
1.1 mspo 1073:
1.4 mspo 1074: pci = [ '0000:00:06.0', '0000:00:0a.0' ]
1.1 mspo 1075:
1.5 mspo 1076: In the domU an `xpci` device will show up, to which one or more pci
1077: busses will attach. Then the PCI drivers will attach to PCI busses as
1078: usual. Note that the default NetBSD DOMU kernels do not have `xpci` or
1079: any PCI drivers built in by default; you have to build your own kernel
1080: to use PCI devices in a domU. Here's a kernel config example:
1.1 mspo 1081:
1.4 mspo 1082: include "arch/i386/conf/XEN3_DOMU"
1083: #include "arch/i386/conf/XENU" # in NetBSD 3.0
1.5 mspo 1084:
1.4 mspo 1085: # Add support for PCI busses to the XEN3_DOMU kernel
1086: xpci* at xenbus ?
1087: pci* at xpci ?
1.5 mspo 1088:
1.4 mspo 1089: # Now add PCI and related devices to be used by this domain
1090: # USB Controller and Devices
1.5 mspo 1091:
1.4 mspo 1092: # PCI USB controllers
1093: uhci* at pci? dev ? function ? # Universal Host Controller (Intel)
1.5 mspo 1094:
1.4 mspo 1095: # USB bus support
1096: usb* at uhci?
1.5 mspo 1097:
1.4 mspo 1098: # USB Hubs
1099: uhub* at usb?
1100: uhub* at uhub? port ? configuration ? interface ?
1.5 mspo 1101:
1.4 mspo 1102: # USB Mass Storage
1103: umass* at uhub? port ? configuration ? interface ?
1104: wd* at umass?
1105: # SCSI controllers
1106: ahc* at pci? dev ? function ? # Adaptec [23]94x, aic78x0 SCSI
1.5 mspo 1107:
1.4 mspo 1108: # SCSI bus support (for both ahc and umass)
1109: scsibus* at scsi?
1.5 mspo 1110:
1.4 mspo 1111: # SCSI devices
1112: sd* at scsibus? target ? lun ? # SCSI disk drives
1113: cd* at scsibus? target ? lun ? # SCSI CD-ROM drives
1.1 mspo 1114:
1115:
1.28 gdt 1116: NetBSD as a domU in a VPS
1117: =========================
1118:
1119: The bulk of the HOWTO is about using NetBSD as a dom0 on your own
1120: hardware. This section explains how to deal with Xen in a domU as a
1121: virtual private server where you do not control or have access to the
1122: dom0.
1123:
1124: TODO: Perhaps reference panix, prmgr, amazon as interesting examples.
1125:
1126: TODO: Somewhere, discuss pvgrub and py-grub to load the domU kernel
1127: from the domU filesystem.
1.44 gdt 1128:
1129: Using npf
1130: ---------
1131:
In standard kernels, npf is a module, and thus cannot be loaded into a
DOMU kernel.
1134:
1135: TODO: explain how to compile npf into a custom kernel, answering:
1136: http://mail-index.netbsd.org/netbsd-users/2014/12/26/msg015576.html
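
One possible approach (an untested sketch; it assumes the pseudo-device
is named npf in kernel configs and that an amd64 domU is being built)
is a custom kernel config such as:

    include "arch/amd64/conf/XEN3_DOMU"
    pseudo-device   npf             # NPF packet filter

followed by a normal build.sh kernel build of that configuration.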