### About RAIDframe

NetBSD uses the [CMU RAIDframe](http://www.pdl.cmu.edu/RAIDframe/) software for
its RAID subsystem. NetBSD is the primary platform for RAIDframe development.
RAIDframe can also be found in older versions of FreeBSD and OpenBSD. NetBSD
also has another way of bundling disks, the
[ccd(4)](http://netbsd.gw.com/cgi-bin/man-cgi?ccd+4+NetBSD-5.0.1+i386) subsystem
(see [Concatenated Disk Device](/guide/ccd)). You should possess some [basic
knowledge](http://www.acnc.com/04_00.html) about RAID concepts and terminology
before continuing. You should also be at least familiar with the different
levels of RAID - Adaptec provides an [excellent
reference](http://www.adaptec.com/en-US/_common/compatibility/_education/RAID_level_compar_wp.htm),
and the [raid(4)](http://netbsd.gw.com/cgi-bin/man-cgi?raid+4+NetBSD-5.0.1+i386)
manpage contains a short overview too.
   
### A warning about Data Integrity, Backups, and High Availability

RAIDframe is a Software RAID implementation, as opposed to Hardware RAID. As
such, it does not need special disk controllers supported by NetBSD. System
administrators should give a great deal of consideration to whether software
RAID or hardware RAID is more appropriate for their "Mission Critical"
applications. For some projects you might consider the use of many of the
hardware RAID devices [supported by
NetBSD](http://www.NetBSD.org/support/hardware/). It is truly at your discretion
what type of RAID you use, but it is recommended that you consider factors such
as: manageability, commercial vendor support, load-balancing and failover, etc.

Depending on the RAID level used, RAIDframe does provide redundancy in the event
of a hardware failure. However, it is *not* a replacement for reliable backups!
Software and user error can still cause data loss. RAIDframe may be used as a
mechanism for facilitating backups in systems without backup hardware, but this
is not an ideal configuration. Finally, with regard to "high availability", RAID
is only a very small component of ensuring data availability.

Once more for good measure: *Back up your data!*
   
### Hardware versus Software RAID

If you run a server, it will most probably already have a Hardware RAID
controller. There are reasons for and against using a Software RAID, depending
on the scenario.

In general, a Software RAID is well suited for low-IO system disks. If you run a
Software RAID, you can exchange disks and disk controllers, or even move the
disks to a completely different machine. The computational overhead for the RAID
is negligible if there are only a few disk IO operations.

If you need a lot of IO, you should use a Hardware RAID. With a Software RAID,
the redundancy data has to be transferred via the bus your disk controller is
connected to. With a Hardware RAID, you transfer data only once - the redundancy
computation and transfer is done by the controller.
   
### Getting Help

If you encounter problems using RAIDframe, you have several options for
obtaining help.

 1. Read the RAIDframe man pages:
    [raid(4)](http://netbsd.gw.com/cgi-bin/man-cgi?raid+4+NetBSD-5.0.1+i386) and
    [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386)
    thoroughly.

 2. Search the mailing list archives. Unfortunately, there is no NetBSD list
    dedicated to RAIDframe support. Depending on the nature of the problem, posts
    tend to end up in a variety of lists. At a very minimum, search
    [netbsd-help](http://mail-index.NetBSD.org/netbsd-help/).
   
    ### Caution

        Because RAIDframe is constantly undergoing development, some information in
        mailing list archives has the potential of being dated and inaccurate.
   
 3. Search the [Problem Report
    database](http://www.NetBSD.org/support/send-pr.html).

 4. If your problem persists: Post to the mailing list most appropriate
    (judgment call). Collect as much verbosely detailed information as possible
    before posting: Include your
    [dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386)
    output from `/var/run/dmesg.boot`, your kernel
    [config(5)](http://netbsd.gw.com/cgi-bin/man-cgi?config+5+NetBSD-5.0.1+i386),
    your `/etc/raid[0-9].conf`, any relevant errors on `/dev/console`,
    `/var/log/messages`, or to `stdout/stderr` of
    [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386).
    The output of **raidctl -s** (if available) will be useful as well. Also
    include details on the troubleshooting steps you've taken thus far, exactly
    when the problem started, and any notes on recent changes that may have
    prompted the problem to develop. Remember to be patient when waiting for a
    response.
   
## Setup RAIDframe Support

The use of RAID will require software and hardware configured properly.
   
### Kernel Support

The GENERIC kernel already has support for RAIDframe. If you have built a custom
kernel for your environment the kernel configuration must have the following
options:

    pseudo-device   raid            8       # RAIDframe disk driver
    options         RAID_AUTOCONFIG         # auto-configuration of RAID components

The RAID support must be detected by the NetBSD kernel, which can be checked by
looking at the output of the
[dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386)
command.

    # dmesg|grep -i raid
    Kernelized RAIDframe activated
   
Historically, the kernel must also contain static mappings between bus addresses
and device nodes in `/dev`. This used to ensure consistency of devices within
RAID sets in the event of a device failure after reboot. Since NetBSD 1.6,
however, using the auto-configuration features of RAIDframe has been recommended
over statically mapping devices. The auto-configuration features allow drives to
move around on the system, and RAIDframe will automatically determine which
components belong to which RAID sets.
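
Auto-configuration is enabled per set with raidctl(8). A minimal sketch
(`raid0` is this guide's example device; a set holding the root file system
uses `-A root` instead, as shown later in this guide):

    # raidctl -A yes raid0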
   
### Power Redundancy and Disk Caching

If your system has an Uninterruptible Power Supply (UPS), if your system has
redundant power supplies, or your disk controller has a battery, you should
consider enabling the read and write caches on your drives. On systems with
redundant power, this will improve drive performance. On systems without
redundant power, the write cache could endanger the integrity of RAID data in
the event of a power loss.

The [dkctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dkctl+8+NetBSD-5.0.1+i386)
utility can be used for this on all kinds of disks that support the operation
(SCSI, EIDE, SATA, ...):

    # dkctl wd0 getcache
    [...snip...]
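
To then enable both the read and write caches on a drive, a sketch along the
same lines (`wd0`/`wd1` as in this example; see dkctl(8) for the exact
sub-commands on your release):

    # dkctl wd0 setcache rw
    # dkctl wd1 setcache rw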
   
## Example: RAID-1 Root Disk

This example explains how to set up a RAID-1 root disk. With RAID-1, components
are mirrored and therefore the server can be fully functional in the event of a
single component failure. The goal is to provide a level of redundancy that will
allow the system to encounter a component failure on either component disk in
the RAID and:

 * Continue normal operations until a maintenance window can be scheduled.
 * Or, in the unlikely event that the component failure causes a system reboot,
   be able to quickly reconfigure the system to boot from the remaining
   component (platform dependent).

![RAID-1 Disk Logical Layout](/guide/images/raidframe_raidL1-diskdia.png)

**RAID-1 Disk Logical Layout**
   
Because RAID-1 provides both redundancy and performance improvements, its most
practical application is on critical "system" partitions such as `/`, `/usr`,
`/var`, `swap`, etc., where read operations are more frequent than write
operations. For other file systems, such as `/home` or `/var/`, other RAID
levels might be considered (see the references above). If one were simply
creating a generic RAID-1 volume for a non-root file system, the cookie-cutter
examples from the man page could be followed, but because the root volume must
be bootable, certain special steps must be taken during initial setup.

*Note*: This example will outline a process that differs only slightly between
the i386 and sparc64 platforms. In an attempt to reduce excessive duplication of
content, where differences do exist and are cosmetic in nature, they will be
pointed out using a section such as this. If the process is drastically
different, the process will branch into separate, platform dependent steps.
   
### Pseudo-Process Outline

Although a much more refined process could be developed using a custom copy of
NetBSD installed on custom-developed removable media, presently the NetBSD
install media lacks RAIDframe tools and support, so the following pseudo process
has become the de facto standard for setting up RAID-1 Root.

 1. Install a stock NetBSD onto Disk0 of your system.

    ![Perform generic install onto Disk0/wd0](/guide/images/raidframe_r1r-pp1.png)

    **Perform generic install onto Disk0/wd0**

 2. Use the installed system on Disk0/wd0 to setup a RAID Set composed of
    Disk1/wd1 only.

    ![Setup RAID Set](/guide/images/raidframe_r1r-pp2.png)

    **Setup RAID Set**
 3. Reboot the system off the Disk1/wd1 with the newly created RAID volume.

    ![Reboot using Disk1/wd1 of RAID](/guide/images/raidframe_r1r-pp3.png)

    **Reboot using Disk1/wd1 of RAID**

 4. Add / re-sync Disk0/wd0 back into the RAID set.

    ![Mirror Disk1/wd1 back to Disk0/wd0](/guide/images/raidframe_r1r-pp4.png)

    **Mirror Disk1/wd1 back to Disk0/wd0**
   
### Hardware Review

At present, the alpha, amd64, i386, pmax, sparc, sparc64, and vax NetBSD
platforms support booting from RAID-1. Booting is not supported from any other
RAID level. Booting from a RAID set is accomplished by teaching the 1st stage
boot loader to understand both 4.2BSD/FFS and RAID partitions. The 1st boot
block code only needs to know enough about the disk partitions and file systems
to be able to read the 2nd stage boot blocks. Therefore, at any time, the
system's BIOS / firmware must be able to read a drive with 1st stage boot blocks
installed. On the i386 platform, configuring this is entirely dependent on the
vendor of the controller card / host bus adapter to which your disks are
connected. On sparc64 this is controlled by the IEEE 1275 Sun OpenBoot Firmware.

This article assumes two identical IDE disks (`/dev/wd{0,1}`) which we are going
to mirror (RAID-1). These disks are identified as:

    # grep ^wd /var/run/dmesg.boot
    [...snip...]
    wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
    wd1(piixide0:1:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)

*Note*: If you are using SCSI, replace `/dev/{,r}wd{0,1}` with
`/dev/{,r}sd{0,1}`.
   
In this example, both disks are jumpered as Master on separate channels on the
same controller. You usually wouldn't want to have both disks on the same bus on
the same controller; this creates a single point of failure. Ideally you would
have the disks on separate channels on separate controllers. Nonetheless, in
most cases the most critical point is the hard disk, so having redundant
channels or controllers is not that important. Plus, having more channels or
controllers increases costs. Some SCSI controllers have multiple channels on the
same controller, however, a SCSI bus reset on one channel could adversely affect
the other channel if the ASIC/IC becomes overloaded. The trade-off with two
controllers is that twice the bandwidth is used on the system bus. For purposes
of simplification, this example shows two disks on different channels on the
same controller.
   
*Note*: RAIDframe requires that all components be of the same size. Actually, it
will use the lowest common denominator among components of dissimilar sizes. For
purposes of illustration, the example uses two disks of identical geometries.
Also, consider the availability of replacement disks if a component suffers a
critical hardware failure.
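
A quick way to compare the usable sizes of two candidate components is to check
the `total sectors` field of each disklabel; a sketch:

    # disklabel wd0 | grep "total sectors"
    # disklabel wd1 | grep "total sectors"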
   
*Tip*: Two disks of identical vendor model numbers could have different
geometries if the drive possesses "grown defects". Use a low-level program to
examine the grown defects table of the disk. These disks are obviously
suboptimal candidates for use in RAID and should be avoided.
   
### Initial Install on Disk0/wd0

Perform a very generic installation onto your Disk0/wd0. Follow the `INSTALL`
instructions for your platform. Install all the sets but do not bother
customizing anything other than the kernel as it will be overwritten.

*Tip*: On i386, during the sysinst install, when prompted if you want to `use
the entire disk for NetBSD`, answer `yes`.

 * [Installing NetBSD: Preliminary considerations and preparations](/guide/inst)
 * [NetBSD/i386 Install](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/i386/INSTALL.html)
 * [NetBSD/sparc64 Install](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/sparc64/INSTALL.html)

Once the installation is complete, you should examine the
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
and [fdisk(8)](http://netbsd.gw.com/cgi-bin/man-cgi?fdisk+8+NetBSD-5.0.1+i386) /
[sunlabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?sunlabel+8+NetBSD-5.0.1+i386)
outputs on the system:

    # df
    [...snip...]
   
### Preparing Disk1/wd1

Once you have a stock install of NetBSD on Disk0/wd0, you are ready to begin.
Disk1/wd1 will be visible and unused by the system. To setup Disk1/wd1, you will
use
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
to allocate the entire second disk to the RAID-1 set.

*Tip*: The best way to ensure that Disk1/wd1 is completely empty is to 'zero'
out the first few sectors of the disk with
[dd(1)](http://netbsd.gw.com/cgi-bin/man-cgi?dd+1+NetBSD-5.0.1+i386). This will
erase the MBR (i386) or Sun disk label (sparc64), as well as the NetBSD disk
label. If you make a mistake at any point during the RAID setup process, you can
always refer to this process to restore the disk to an empty state.

*Note*: On sparc64, use `/dev/rwd1c` instead of `/dev/rwd1d`!

    # dd if=/dev/zero of=/dev/rwd1d bs=8k count=1
    1+0 records in
    1+0 records out
    8192 bytes transferred in 0.003 secs (2730666 bytes/sec)
   
Once this is complete, on i386, verify that both the MBR and NetBSD disk labels
are gone. On sparc64, verify that the Sun Disk label is gone as well.

On i386:

    [...snip...]

On sparc64:

    [...snip...]
    disklabel: boot block size 0
    disklabel: super block size 0
   
Now that you are certain the second disk is empty, on i386 you must establish
the MBR on the second disk using the values obtained from Disk0/wd0 above. We
must remember to mark the NetBSD partition active or the system will not boot.
You must also create a NetBSD disklabel on Disk1/wd1 that will enable a RAID
volume to exist upon it. On sparc64, you will need to simply
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
the second disk which will write the proper Sun Disk Label.

*Tip*:
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
will use your shell's `$EDITOR` environment variable to edit the
disklabel. The default is
[vi(1)](http://netbsd.gw.com/cgi-bin/man-cgi?vi+1+NetBSD-5.0.1+i386).
   
On i386:

    [...snip...]

On sparc64:

    [...snip...]
     a:  19541088         0       RAID                     # (Cyl.      0 -  19385)
     c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)

    # sunlabel /dev/rwd1c
    sunlabel> P
    a: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
    c: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
   
*Note*: On i386, the `c:` and `d:` slices are reserved. `c:` represents the
NetBSD portion of the disk. `d:` represents the entire disk. Because we want to
allocate the entire NetBSD MBR partition to RAID, and because `a:` resides
within the bounds of `c:`, the `a:` and `c:` slices have the same size and
offset values. The offset must start at a track boundary (an increment of
sectors matching the sectors/track value in the disk label). On sparc64 however,
`c:` represents the entire NetBSD partition in the Sun disk label and `d:` is
not reserved. Also note that sparc64's `c:` and `a:` require no offset from the
beginning of the disk, however if they should need to be, the offset must start
at a cylinder boundary (an increment of sectors matching the sectors/cylinder
value).
   
### Initializing the RAID Device

Next we create the configuration file for the RAID set / volume. Traditionally,
RAIDframe configuration files belong in `/etc` and would be read and initialized
at boot time, however, because we are creating a bootable RAID volume, the
configuration data will actually be written into the RAID volume using the
*auto-configure* feature. Therefore, files are needed only during the initial
setup and should not reside in `/etc`.

    # vi /var/tmp/raid0.conf
    START array
    1 2 0

    START disks
    absent
    /dev/wd1a

    START layout
    128 1 1 1

    START queue
    fifo 100
   
Note that `absent` means a non-existing disk. This will allow us to establish
the RAID volume with a bogus component that we will substitute for Disk0/wd0 at
a later time.

Next we configure the RAID device and initialize the serial number to something
unique. In this example we use a "YYYYMMDD*`Revision`*" scheme. The format you
choose is entirely at your discretion, however the scheme you choose should
ensure that no two RAID sets use the same serial number at the same time.

After that we initialize the RAID set for the first time, safely ignoring the
errors regarding the bogus component.

    # raidctl -v -C /var/tmp/raid0.conf raid0
    [...snip...]
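
A sketch of the serial-number and parity initialization steps that follow the
`-C` command (the serial number here is a hypothetical example following the
scheme above; flags per raidctl(8)):

    # raidctl -v -I 2010010101 raid0
    # raidctl -v -i raid0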
   
### Setting up Filesystems

*Caution*: The root filesystem must begin at sector 0 of the RAID device. If
not, the primary boot loader will be unable to find the secondary boot loader.

The RAID device is now configured and available. The RAID device is a pseudo
disk-device. It will be created with a default disk label. You must now
determine the proper sizes for disklabel slices for your production environment.
For purposes of simplification in this example, our system will have 8.5
gigabytes dedicated to `/` as `/dev/raid0a` and the rest allocated to `swap`
as `/dev/raid0b`.
   
*Caution*: This is an unrealistic disk layout for a production server; the
NetBSD Guide can expand on proper partitioning technique. See [Installing
NetBSD: Preliminary considerations and preparations](/guide/inst).

*Note*: Note that 1 GB is 2\*1024\*1024=2097152 blocks (1 block is 512 bytes, or
0.5 kilobytes). Despite what the underlying hardware composing a RAID set is,
the RAID pseudo disk will always have 512 bytes/sector.
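
For example, the 8.5 gigabytes dedicated to `/` above works out to
8.5 × 2097152 = 17825792 sectors of 512 bytes each.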
   
*Note*: In our example, the space allocated to the underlying `a:` slice
composing the RAID set differed between i386 and sparc64, therefore the total
sectors of the RAID volumes differs:

On i386:

    [...snip...]
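
The elided steps edit the RAID device's disklabel and create the root file
system on it. A minimal sketch of that sequence (partition layout as described
above; the `-O2` FFSv2 option is an assumption, check newfs(8) on your
release):

    # disklabel -r -e -I raid0    # create a: (/) at offset 0 and b: (swap)
    # newfs -O2 /dev/rraid0a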
   
### Migrating System to RAID

The new RAID filesystems are now ready for use. We mount them under `/mnt` and
copy all files from the old system. This can be done using
[dump(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dump+8+NetBSD-5.0.1+i386) or
[pax(1)](http://netbsd.gw.com/cgi-bin/man-cgi?pax+1+NetBSD-5.0.1+i386).

    # mount /dev/raid0a /mnt
    # cd /; pax -v -X -rw -pe . /mnt
    [...snip...]
   
The NetBSD install now exists on the RAID filesystem. We need to fix the
mount-points in the new copy of `/etc/fstab` or the system will not come up
correctly. Replace instances of `wd0` with `raid0`.
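
One way to do this non-interactively, a sketch (the running system's
`/etc/fstab` still describes the `wd0` layout; inspect the result by hand):

    # sed -e 's/wd0/raid0/g' /etc/fstab > /mnt/etc/fstab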
   
The swap should be unconfigured upon shutdown to avoid parity errors on the RAID
device. This can be done with a simple, one-line setting in `/etc/rc.conf`.

    # vi /mnt/etc/rc.conf
    swapoff=YES
   
Next, the boot loader must be installed on Disk1/wd1. Failure to install the
loader on Disk1/wd1 will render the system un-bootable if Disk0/wd0 fails. You
should hope your system won't have to reboot when wd0 fails.

*Tip*: Because the BIOS/CMOS menus in many i386 based systems are misleading
with regard to device boot order, I highly recommend utilizing the `-o
timeout=X` option supported by the i386 1st stage boot loader. Set up unique
values for each disk as a point of reference so that you can easily determine
from which disk the system is booting.
   
*Caution*: Although it may seem logical to install the 1st stage boot block into
`/dev/rwd1{c,d}` (which is historically correct with NetBSD 1.6.x
[installboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?installboot+8+NetBSD-5.0.1+i386)),
this is no longer the case. If you make this mistake, the boot sector will
become irrecoverably damaged and you will need to start the process over again.

On i386, install the boot loader into `/dev/rwd1a`:

    # /usr/sbin/installboot -o timeout=30 -v /dev/rwd1a /usr/mdec/bootxx_ffsv1
    [...snip...]
    Ignoring PBR with invalid magic in sector 0 of `/dev/rwd1a'
    Boot options:        timeout 30, flags 0, speed 9600, ioaddr 0, console pc
   
On sparc64, install the boot loader into `/dev/rwd1a` as well, however the `-o`
flag is unsupported (and un-needed thanks to OpenBoot):

    # /usr/sbin/installboot -v /dev/rwd1a /usr/mdec/bootblk
    [...snip...]
    Bootstrap byte count:   5140
    Writing bootstrap
   
Finally the RAID set must be made auto-configurable and the system should be
rebooted. After the reboot everything is mounted from the RAID devices.

    # raidctl -v -A root raid0
    [...snip...]
    # shutdown -r now
   
### Warning

Always use
[shutdown(8)](http://netbsd.gw.com/cgi-bin/man-cgi?shutdown+8+NetBSD-5.0.1+i386)
when shutting down. Never simply use
[reboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?reboot+8+NetBSD-5.0.1+i386).
[reboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?reboot+8+NetBSD-5.0.1+i386)
will not properly run shutdown RC scripts and will not safely disable swap. This
will cause dirty parity at every reboot.
   
### The first boot with RAID

At this point, temporarily configure your system to boot Disk1/wd1. See notes in
[[Testing Boot Blocks|guide/rf#adding-text-boot]] for details on this process.
The system should boot now and all filesystems should be on the RAID devices.
The RAID will be functional with a single component, however the set is not
fully functional because the bogus drive (wd9) has failed.
   
    # egrep -i "raid|root" /var/run/dmesg.boot
    [...snip...]
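
The overall state of the set can be confirmed with raidctl(8); a sketch (the
bogus component should be reported as failed):

    # raidctl -s raid0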
   
### Adding Disk0/wd0 to RAID

We will now add Disk0/wd0 as a component of the RAID. This will destroy the
original file system structure. On i386, the MBR disklabel will be unaffected
(remember we copied wd0's label to wd1 anyway), therefore there is no need to
"zero" Disk0/wd0. However, we need to relabel Disk0/wd0 to have an identical
NetBSD disklabel layout as Disk1/wd1. Then we add Disk0/wd0 as "hot-spare" to
the RAID set and initiate the parity reconstruction for all RAID devices,
effectively bringing Disk0/wd0 into the RAID-1 set and "syncing up" both disks.

    # disklabel -r wd1 > /tmp/disklabel.wd1
    # disklabel -R -r wd0 /tmp/disklabel.wd1
   
As a last-minute sanity check, you might want to use
[diff(1)](http://netbsd.gw.com/cgi-bin/man-cgi?diff+1+NetBSD-5.0.1+i386) to
ensure that the disklabels of Disk0/wd0 match Disk1/wd1. You should also backup
these files for reference in the event of an emergency.

    # disklabel -r wd0 > /tmp/disklabel.wd0
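
A sketch of that comparison, using the two label dumps created above (no output
means the labels match):

    # diff /tmp/disklabel.wd0 /tmp/disklabel.wd1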

Once you are sure, add Disk0/wd0 as a spare and start the reconstruction:

    # raidctl -v -a /dev/wd0a raid0
    [...snip...]
    # raidctl -v -F component0 raid0
    RECON: initiating reconstruction on col 0 -> spare at col 2
     11% |****                                   | ETA:    04:26 \
   
Depending on the speed of your hardware, the reconstruction time will vary. You
may wish to watch it on another terminal (note that you can interrupt
`raidctl -S` any time without stopping the synchronisation):
   
    # raidctl -S raid0
    [...snip...]

After reconstruction, both disks should be in use:

    # raidctl -v -s raid0
    [...snip...]
         /dev/wd0a: used_spare
         [...snip...]
   
When the reconstruction is finished we need to install the boot loader on
Disk0/wd0. On i386, install the boot loader into `/dev/rwd0a`:

    # /usr/sbin/installboot -o timeout=15 -v /dev/rwd0a /usr/mdec/bootxx_ffsv1
    [...snip...]

On sparc64:

    # /usr/sbin/installboot -v /dev/rwd0a /usr/mdec/bootblk
    [...snip...]
    Bootstrap byte count:   5140
    Writing bootstrap
   
And finally, reboot the machine one last time before proceeding. This is
required to migrate Disk0/wd0 from status "used\_spare" as "Component0" to state
"optimal". Refer to notes in the next section regarding verification of clean
parity after each reboot.

    # shutdown -r now
   
### Testing Boot Blocks

At this point, you need to ensure that your system's hardware can properly boot
using the boot blocks on either disk. On i386, this is a hardware-dependent
process that may be done via your motherboard CMOS/BIOS menu or your controller
card's configuration menu.

On i386, use the menu system on your machine to set the boot device order /
priority to Disk1/wd1 before Disk0/wd0. The examples here depict a generic Award
BIOS.

![Award BIOS i386 Boot Disk1/wd1](/guide/images/raidframe_awardbios2.png)

**Award BIOS i386 Boot Disk1/wd1**

Save changes and exit:

    [...snip...]
    booting hd0a:netbsd - starting in 30

You can determine that the BIOS is reading Disk1/wd1 because the timeout of the
boot loader is 30 seconds instead of 15. After the reboot, re-enter the BIOS and
configure the drive boot order back to the default:
   
![Award BIOS i386 Boot Disk0/wd0](/guide/images/raidframe_awardbios1.png)

**Award BIOS i386 Boot Disk0/wd0**

Save changes and exit:

    [...snip...]
    Press return to boot now, any other key for boot menu
    booting hd0a:netbsd - starting in 15
   
Notice how your custom kernel detects controller/bus/drive assignments
independent of what the BIOS assigns as the boot disk. This is the expected
behavior.

On sparc64, use the Sun OpenBoot **devalias** to confirm that both disks are bootable:

    [...snip...]

And the second disk:

    [...snip...]
         The Regents of the University of California.  All rights reserved.
    [...snip...]
   
At each boot, the following should appear in the NetBSD kernel
[dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386):

    Kernelized RAIDframe activated
    [...snip...]
    root on raid0a dumps on raid0b
    root file system type: ffs
   
Once you are certain that both disks are bootable, verify the RAID parity is
clean after each reboot:

    # raidctl -v -s raid0
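
If the status ever reports dirty parity, it can be checked and rewritten with
the parity-check option; a sketch (see raidctl(8)):

    # raidctl -P raid0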
