See possibly outdated [Solaris domU instructions](/ports/xen/howto-solaris/).

PCI passthrough: Using PCI devices in guest domains
---------------------------------------------------

The domain0 can give other domains access to selected PCI
devices. This can allow, for example, a non-privileged domain to have
access to a physical network interface or disk controller. However,
keep in mind that giving a domain access to a PCI device most likely
will give the domain read/write access to the whole physical memory,
as PCs don't have an IOMMU to restrict memory access to DMA-capable
devices. Also, it's not possible to export ISA devices to non-domain0
domains, which means that the primary VGA adapter can't be exported.
A guest domain trying to access the VGA registers will panic.

This functionality is only available in NetBSD-5.1 (and later) domain0
and domU. If the domain0 is NetBSD, it has to be running Xen 3.1, as
support has not been ported to later versions at this time.

For a PCI device to be exported to a domU, it has to be attached to the
`pciback` driver in the domain0. Devices passed to the domain0 via the
`pciback.hide` boot parameter will attach to `pciback` instead of the
usual driver. The list of devices is specified as `(bus:dev.func)`,
where bus and dev are 2-digit hexadecimal numbers, and func a
single-digit number:

    pciback.hide=(00:0a.0)(00:06.0)

pciback devices should show up in the domain0's boot messages, and the
devices should be listed in the `/kern/xen/pci` directory.

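For example, from a domain0 shell you might confirm that the devices
were claimed by `pciback`; the exact file names and output shown here
are illustrative, not guaranteed:

    $ ls /kern/xen/pci
    0000:00:06.0    0000:00:0a.0
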
PCI devices to be exported to a domU are listed in the `pci` array of
the domU's config file, with the format `'0000:bus:dev.func'`:

    pci = [ '0000:00:06.0', '0000:00:0a.0' ]

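Placed in context, a minimal domU configuration with PCI passthrough
might look like the following sketch; the kernel path, disk image,
domain name, and MAC address are illustrative assumptions, not
prescribed values — only the `pci` line is specific to passthrough:

    # hypothetical domU config file; adjust names and paths to your setup
    kernel = "/path/to/netbsd-DOMU-kernel"   # custom kernel with xpci support
    memory = 256
    name = "pcidomu"
    disk = [ 'file:/path/to/domu-disk.img,0x01,w' ]
    vif = [ 'mac=00:16:3e:00:00:01,bridge=bridge0' ]
    pci = [ '0000:00:06.0', '0000:00:0a.0' ]
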
In the domU an `xpci` device will show up, to which one or more pci
busses will attach. Then the PCI drivers will attach to PCI busses as
usual. Note that the default NetBSD DOMU kernels do not have `xpci`
or any PCI drivers built in by default; you have to build your own
kernel to use PCI devices in a domU. Here's a kernel config example;
note that only the `xpci` lines are unusual.

    include "arch/i386/conf/XEN3_DOMU"

    # Add support for PCI busses to the XEN3_DOMU kernel
    xpci* at xenbus ?
    pci* at xpci ?

    # PCI USB controllers
    uhci* at pci? dev ? function ?  # Universal Host Controller (Intel)

    # USB bus support
    usb* at uhci?

    # USB Hubs
    uhub* at usb?
    uhub* at uhub? port ? configuration ? interface ?

    # USB Mass Storage
    umass* at uhub? port ? configuration ? interface ?
    wd* at umass?

    # SCSI controllers
    ahc* at pci? dev ? function ?   # Adaptec [23]94x, aic78x0 SCSI

    # SCSI bus support (for both ahc and umass)
    scsibus* at scsi?

    # SCSI devices
    sd* at scsibus? target ? lun ?  # SCSI disk drives
    cd* at scsibus? target ? lun ?  # SCSI CD-ROM drives
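One way to build such a kernel is the standard `build.sh` procedure;
the source tree location and the config name `MYDOMU` are assumptions
here, not requirements:

    $ cd /usr/src
    $ cp MYDOMU sys/arch/i386/conf/MYDOMU
    $ ./build.sh -m i386 tools kernel=MYDOMU

The resulting kernel lands under `obj/sys/arch/i386/compile/MYDOMU/`
(the exact path depends on your build settings).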

NetBSD as a domU in a VPS
=========================

This section explains how to deal with Xen in a domU as a
virtual private server where you do not control or have access to the
dom0.

VPS operators provide varying degrees of access and mechanisms for
configuration. The big issue is usually how one controls which kernel
is booted, because the kernel is nominally in the dom0 filesystem (to
which VPS users do not normally have access).

A VPS user may want to compile a kernel for security updates, to run
npf, run IPsec, or any other reason why someone would want to change
their kernel.

One approach is to have an administrative interface to upload a kernel,
or to select from a prepopulated list.

Other approaches are pvgrub and py-grub, which are ways to start a
bootloader from the dom0 instead of the actual domU kernel, and for
that loader to then load a kernel from the domU filesystem. This is
closer to a regular physical computer, where someone who controls a
machine can replace the kernel.

prmgr and pvgrub
----------------

TODO: Perhaps reference panix, prmgr, amazon as interesting examples.
Explain what prmgr does.

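With the pvgrub approach, the bootloader typically reads a grub-legacy
menu from the domU's own filesystem; a minimal sketch might look like
this (the file location, partition, and kernel options are assumptions
and vary by provider):

    # /grub/menu.lst on the domU filesystem (hypothetical example)
    default=0
    timeout=5

    title NetBSD
        root (hd0,0)
        kernel /netbsd root=xbd0a
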
Using npf
---------