   
### About RAIDframe
   
NetBSD uses the [CMU RAIDframe](http://www.pdl.cmu.edu/RAIDframe/) software for
its RAID subsystem. NetBSD is the primary platform for RAIDframe development.
RAIDframe can also be found in older versions of FreeBSD and OpenBSD. NetBSD
also has another way of bundling disks, the
[ccd(4)](http://netbsd.gw.com/cgi-bin/man-cgi?ccd+4+NetBSD-5.0.1+i386) subsystem
(see [Concatenated Disk Device](/guide/ccd)). You should possess some [basic
knowledge](http://www.acnc.com/04_00.html) about RAID concepts and terminology
before continuing. You should also be at least familiar with the different
levels of RAID - Adaptec provides an [excellent
reference](http://www.adaptec.com/en-US/_common/compatibility/_education/RAID_level_compar_wp.htm),
and the [raid(4)](http://netbsd.gw.com/cgi-bin/man-cgi?raid+4+NetBSD-5.0.1+i386)
manpage contains a short overview too.
   
### A warning about Data Integrity, Backups, and High Availability
   
RAIDframe is a Software RAID implementation, as opposed to Hardware RAID. As
such, it does not need special disk controllers supported by NetBSD. System
administrators should give a great deal of consideration to whether software
RAID or hardware RAID is more appropriate for their "Mission Critical"
applications. For some projects you might consider the use of many of the
hardware RAID devices [supported by
NetBSD](http://www.NetBSD.org/support/hardware/). It is truly at your discretion
what type of RAID you use, but it is recommended that you consider factors such
as: manageability, commercial vendor support, load-balancing and failover, etc.
   
Depending on the RAID level used, RAIDframe does provide redundancy in the event
of a hardware failure. However, it is *not* a replacement for reliable backups!
Software and user error can still cause data loss. RAIDframe may be used as a
mechanism for facilitating backups in systems without backup hardware, but this
is not an ideal configuration. Finally, with regard to "high availability", RAID
is only a very small component to ensuring data availability.
   
Once more for good measure: *Back up your data!*
   
### Hardware versus Software RAID
   
If you run a server, it will most probably already have a Hardware RAID
controller. There are reasons for and against using a Software RAID, depending
on the scenario.
   
In general, a Software RAID is well suited for low-IO system disks. If you run a
Software RAID, you can exchange disks and disk controllers, or even move the
disks to a completely different machine. The computational overhead for the RAID
is negligible if there are only a few disk I/O operations.
   
If you need a lot of I/O, you should use a Hardware RAID. With a Software RAID, the
redundancy data has to be transferred via the bus your disk controller is
connected to. With a Hardware RAID, you transfer data only once - the redundancy
computation and transfer is done by the controller.
   
### Getting Help
   
If you encounter problems using RAIDframe, you have several options for
obtaining help.
   
 1. Read the RAIDframe man pages:
    [raid(4)](http://netbsd.gw.com/cgi-bin/man-cgi?raid+4+NetBSD-5.0.1+i386) and
    [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386)
    thoroughly.
   
 2. Search the mailing list archives. Unfortunately, there is no NetBSD list
    dedicated to RAIDframe support. Depending on the nature of the problem, posts
    tend to end up in a variety of lists. At a very minimum, search
    [netbsd-help](http://mail-index.NetBSD.org/netbsd-help/),
   
    ### Caution
   
        Because RAIDframe is constantly undergoing development, some information in
        mailing list archives has the potential of being dated and inaccurate.
   
 3. Search the [Problem Report
    database](http://www.NetBSD.org/support/send-pr.html).
   
 4. If your problem persists: Post to the mailing list most appropriate
    (judgment call). Collect as much verbosely detailed information as possible
    before posting: Include your
    [dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386)
    output from `/var/run/dmesg.boot`, your kernel
    [config(5)](http://netbsd.gw.com/cgi-bin/man-cgi?config+5+NetBSD-5.0.1+i386),
    your `/etc/raid[0-9].conf`, any relevant errors on `/dev/console`,
    `/var/log/messages`, or to `stdout/stderr` of
    [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386).
    The output of **raidctl -s** (if available) will be useful as well. Also
    include details on the troubleshooting steps you've taken thus far, exactly
    when the problem started, and any notes on recent changes that may have
    prompted the problem to develop. Remember to be patient when waiting for a
    response.
   
## Setup RAIDframe Support
The use of RAID will require software and hardware.
   
### Kernel Support
   
The GENERIC kernel already has support for RAIDframe. If you have built a custom
kernel for your environment, the kernel configuration must have the following
options:
   
    pseudo-device   raid            8       # RAIDframe disk driver
    options         RAID_AUTOCONFIG         # auto-configuration of RAID components
   
The RAID support must be detected by the NetBSD kernel, which can be checked by
looking at the output of the
[dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386)
command.
   
    # dmesg|grep -i raid
    Kernelized RAIDframe activated
   
Historically, the kernel also had to contain static mappings between bus addresses
and device nodes in `/dev`. This used to ensure consistency of devices within
RAID sets in the event of a device failure after reboot. Since NetBSD 1.6,
however, using the auto-configuration features of RAIDframe has been recommended
over statically mapping devices. The auto-configuration features allow drives to
move around on the system, and RAIDframe will automatically determine which
components belong to which RAID sets.
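
Auto-configuration is managed with
[raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386);
a minimal sketch, assuming an already-configured set named `raid0`:

    # raidctl -A yes raid0
    # raidctl -A root raid0

The first command marks the set auto-configurable at boot; the second
additionally allows the set to contain the root file system.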
   
### Power Redundancy and Disk Caching
   
If your system has an Uninterruptible Power Supply (UPS), has
redundant power supplies, or has a disk controller with a battery, you should
consider enabling the read and write caches on your drives. On systems with
redundant power, this will improve drive performance. On systems without
redundant power, the write cache could endanger the integrity of RAID data in
the event of a power loss.
   
The [dkctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dkctl+8+NetBSD-5.0.1+i386)
utility can be used for this on all kinds of disks that support the operation
(SCSI, EIDE, SATA, ...):
   
    # dkctl wd0 getcache
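
If your power situation qualifies, the same utility can then enable both
caches; a minimal sketch, assuming the example disk `wd0`:

    # dkctl wd0 setcache rw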
   
## Example: RAID-1 Root Disk
   
This example explains how to set up a RAID-1 root disk. With RAID-1, components
are mirrored and therefore the server can be fully functional in the event of a
single component failure. The goal is to provide a level of redundancy that will
allow the system to encounter a component failure on either component disk in
the RAID and:
   
 * Continue normal operations until a maintenance window can be scheduled.
 * Or, in the unlikely event that the component failure causes a system reboot,
   be able to quickly reconfigure the system to boot from the remaining
   component (platform dependent).
   
![RAID-1 Disk Logical Layout](/guide/images/raidframe_raidl1-diskdia.png)
   
**RAID-1 Disk Logical Layout**
   
Because RAID-1 provides both redundancy and performance improvements, its most
practical application is on critical "system" partitions such as `/`, `/usr`,
`/var`, `swap`, etc., where read operations are more frequent than write
operations. For other file systems, such as `/home` or `/var/`, other RAID
levels might be considered (see the references above). If one were simply
creating a generic RAID-1 volume for a non-root file system, the cookie-cutter
examples from the man page could be followed, but because the root volume must
be bootable, certain special steps must be taken during initial setup.
   
*Note*: This example will outline a process that differs only slightly between
the i386 and sparc64 platforms. In an attempt to reduce excessive duplication of
content, where differences do exist and are cosmetic in nature, they will be
pointed out using a section such as this. If the process is drastically
different, the process will branch into separate, platform dependent steps.
   
### Pseudo-Process Outline
   
Although a much more refined process could be developed using a custom copy of
NetBSD installed on custom-developed removable media, presently the NetBSD
install media lacks RAIDframe tools and support, so the following pseudo process
has become the de facto standard for setting up RAID-1 Root.
   
 1. Install a stock NetBSD onto Disk0 of your system.
   
    ![Perform generic install onto Disk0/wd0](/guide/images/raidframe_r1r-pp1.png)
   
    **Perform generic install onto Disk0/wd0**
   
 2. Use the installed system on Disk0/wd0 to set up a RAID Set composed of
    Disk1/wd1 only.
   
    ![Setup RAID Set](/guide/images/raidframe_r1r-pp2.png)

    **Setup RAID Set**
 3. Reboot the system off Disk1/wd1 with the newly created RAID volume.
   
    ![Reboot using Disk1/wd1 of RAID](/guide/images/raidframe_r1r-pp3.png)
   
    **Reboot using Disk1/wd1 of RAID**
   
 4. Add / re-sync Disk0/wd0 back into the RAID set.
   
    ![Mirror Disk1/wd1 back to Disk0/wd0](/guide/images/raidframe_r1r-pp4.png)
   
    **Mirror Disk1/wd1 back to Disk0/wd0**
   
### Hardware Review
   
At present, the alpha, amd64, i386, pmax, sparc, sparc64, and vax NetBSD
platforms support booting from RAID-1. Booting is not supported from any other
RAID level. Booting from a RAID set is accomplished by teaching the 1st stage
boot loader to understand both 4.2BSD/FFS and RAID partitions. The 1st boot
block code only needs to know enough about the disk partitions and file systems
to be able to read the 2nd stage boot blocks. Therefore, at any time, the
system's BIOS / firmware must be able to read a drive with 1st stage boot blocks
installed. On the i386 platform, configuring this is entirely dependent on the
vendor of the controller card / host bus adapter to which your disks are
connected. On sparc64 this is controlled by the IEEE 1275 Sun OpenBoot Firmware.
   
This article assumes two identical IDE disks (`/dev/wd{0,1}`) which we are going
to mirror (RAID-1). These disks are identified as:
   
    # grep ^wd /var/run/dmesg.boot
    wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
    wd1(piixide0:1:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)
   
*Note*: If you are using SCSI, replace `/dev/[r]wd{0,1}` with `/dev/[r]sd{0,1}`.
   
In this example, both disks are jumpered as Master on separate channels on the
same controller. You usually wouldn't want to have both disks on the same bus on
the same controller; this creates a single point of failure. Ideally you would
have the disks on separate channels on separate controllers. Nonetheless, in
most cases the most critical point is the hard disk, so having redundant
channels or controllers is not that important. Plus, having more channels or
controllers increases costs. Some SCSI controllers have multiple channels on the
same controller; however, a SCSI bus reset on one channel could adversely affect
the other channel if the ASIC/IC becomes overloaded. The trade-off with two
controllers is that twice the bandwidth is used on the system bus. For purposes
of simplification, this example shows two disks on different channels on the
same controller.
   
*Note*: RAIDframe requires that all components be of the same size. Actually, it
will use the lowest common denominator among components of dissimilar sizes. For
purposes of illustration, the example uses two disks of identical geometries.
Also, consider the availability of replacement disks if a component suffers a
critical hardware failure.
   
*Tip*: Two disks of identical vendor model numbers could have different
geometries if the drive possesses "grown defects". Use a low-level program to
examine the grown defects table of the disk. These disks are obviously
suboptimal candidates for use in RAID and should be avoided.
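
For SCSI disks, the
[scsictl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?scsictl+8+NetBSD-5.0.1+i386)
utility in the base system can read the defect lists; a minimal sketch,
assuming the disk attaches as `sd0`:

    # scsictl sd0 defects grown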
   
### Initial Install on Disk0/wd0
   
Perform a very generic installation onto your Disk0/wd0. Follow the `INSTALL`
instructions for your platform. Install all the sets but do not bother
customizing anything other than the kernel as it will be overwritten.
   
*Tip*: On i386, during the sysinst install, when prompted if you want to `use
the entire disk for NetBSD`, answer `yes`.
   
 * [Installing NetBSD: Preliminary considerations and preparations](/guide/inst)
 * [NetBSD/i386 Install Directions](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/i386/INSTALL.html)
 * [NetBSD/sparc64 Install Directions](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/sparc64/INSTALL.html)
   
Once the installation is complete, you should examine the
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
and [fdisk(8)](http://netbsd.gw.com/cgi-bin/man-cgi?fdisk+8+NetBSD-5.0.1+i386) /
[sunlabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?sunlabel+8+NetBSD-5.0.1+i386)
outputs on the system:
   
    # df
   
### Preparing Disk1/wd1
   
Once you have a stock install of NetBSD on Disk0/wd0, you are ready to begin.
Disk1/wd1 will be visible and unused by the system. To set up Disk1/wd1, you will
use
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
to allocate the entire second disk to the RAID-1 set.
   
*Tip*: The best way to ensure that Disk1/wd1 is completely empty is to 'zero'
out the first few sectors of the disk with
[dd(1)](http://netbsd.gw.com/cgi-bin/man-cgi?dd+1+NetBSD-5.0.1+i386). This will
erase the MBR (i386) or Sun disk label (sparc64), as well as the NetBSD disk
label. If you make a mistake at any point during the RAID setup process, you can
always refer to this process to restore the disk to an empty state.
*Note*: On sparc64, use `/dev/rwd1c` instead of `/dev/rwd1d`!

    # dd if=/dev/zero of=/dev/rwd1d bs=8k count=1
    1+0 records in
    1+0 records out
    8192 bytes transferred in 0.003 secs (2730666 bytes/sec)
   
Once this is complete, on i386, verify that both the MBR and NetBSD disk labels
are gone. On sparc64, verify that the Sun Disk label is gone as well.
   
On i386:

On sparc64:
    disklabel: boot block size 0
    disklabel: super block size 0
   
Now that you are certain the second disk is empty, on i386 you must establish
the MBR on the second disk using the values obtained from Disk0/wd0 above. We
must remember to mark the NetBSD partition active or the system will not boot.
You must also create a NetBSD disklabel on Disk1/wd1 that will enable a RAID
volume to exist upon it. On sparc64, you simply need to
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
the second disk, which will write the proper Sun Disk Label.
   
*Tip*:
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
will use your shell's `$EDITOR` environment variable to edit the
disklabel. The default is
[vi(1)](http://netbsd.gw.com/cgi-bin/man-cgi?vi+1+NetBSD-5.0.1+i386).
   
On i386:
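
In outline, the i386 steps are as follows (a sketch, not a verbatim session;
`-0ua` updates the first MBR partition interactively and marks it active, and
`-I` has
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
start from a default label on the still-unlabeled disk):

    # fdisk -0ua /dev/rwd1d
    # disklabel -r -e -I wd1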
On sparc64:
     a:  19541088         0       RAID                     # (Cyl.      0 -  19385)
     c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
           
    # sunlabel /dev/rwd1c
    sunlabel> P
    a: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
    c: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
   
*Note*: On i386, the `c:` and `d:` slices are reserved. `c:` represents the
NetBSD portion of the disk. `d:` represents the entire disk. Because we want to
allocate the entire NetBSD MBR partition to RAID, and because `a:` resides
within the bounds of `c:`, the `a:` and `c:` slices have the same size and
offset values. The offset must start at a track boundary (an increment of
sectors matching the sectors/track value in the disk label). On sparc64 however,
`c:` represents the entire NetBSD partition in the Sun disk label and `d:` is
not reserved. Also note that sparc64's `c:` and `a:` require no offset from the
beginning of the disk; if an offset is needed, however, it must start at a
cylinder boundary (an increment of sectors matching the sectors/cylinder
value).
   
### Initializing the RAID Device
   
Next we create the configuration file for the RAID set / volume. Traditionally,
RAIDframe configuration files belong in `/etc` and would be read and initialized
at boot time; however, because we are creating a bootable RAID volume, the
configuration data will actually be written into the RAID volume using the
*auto-configure* feature. Therefore, files are needed only during the initial
setup and should not reside in `/etc`.
   
    # vi /var/tmp/raid0.conf
    START array
    # numRow numCol numSpare
    1 2 0

    START disks
    absent
    /dev/wd1a

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
    128 1 1 1
    START queue
    fifo 100
   
Note that `absent` means a non-existing disk. This will allow us to establish
the RAID volume with a bogus component that we will substitute for Disk0/wd0 at
a later time.
   
Next we configure the RAID device and initialize the serial number to something
unique. In this example we use a "YYYYMMDD*`Revision`*" scheme. The format you
choose is entirely at your discretion, however the scheme you choose should
ensure that no two RAID sets use the same serial number at the same time.
   
After that we initialize the RAID set for the first time, safely ignoring the
errors regarding the bogus component.
   
    # raidctl -v -C /var/tmp/raid0.conf raid0
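
In outline, the serial number and initialization commands are (a sketch; the
serial `2013030201` is an arbitrary value in the "YYYYMMDD*`Revision`*" scheme
described above):

    # raidctl -v -I 2013030201 raid0
    # raidctl -v -i raid0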
   
### Setting up Filesystems
   
*Caution*: The root filesystem must begin at sector 0 of the RAID device. If
not, the primary boot loader will be unable to find the secondary boot loader.
   
The RAID device is now configured and available. The RAID device is a pseudo
disk-device. It will be created with a default disk label. You must now
determine the proper sizes for disklabel slices for your production environment.
For purposes of simplification in this example, our system will have 8.5
gigabytes dedicated to `/` as `/dev/raid0a` and the rest allocated to `swap`
as `/dev/raid0b`.
   
*Caution*: This is an unrealistic disk layout for a production server; the
NetBSD Guide can expand on proper partitioning technique. See [Installing
NetBSD: Preliminary considerations and preparations](/guide/inst).
   
*Note*: 1 GB is 2\*1024\*1024=2097152 blocks (1 block is 512 bytes, or
0.5 kilobytes). Regardless of the underlying hardware composing a RAID set,
the RAID pseudo disk will always have 512 bytes/sector.
   
*Note*: In our example, the space allocated to the underlying `a:` slice
composing the RAID set differed between i386 and sparc64, therefore the total
number of sectors of the RAID volumes differs:
   
On i386:
Next, format the newly created `/` partition as a 4.2BSD FFSv1 file system.
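
A sketch of that step with
[newfs(8)](http://netbsd.gw.com/cgi-bin/man-cgi?newfs+8+NetBSD-5.0.1+i386),
where `-O 1` selects FFSv1 so the 1st stage bootblocks (`bootxx_ffsv1`, used
below) can read it:

    # newfs -O 1 /dev/rraid0a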
   
### Migrating System to RAID
   
The new RAID filesystems are now ready for use. We mount them under `/mnt` and
copy all files from the old system. This can be done using
[dump(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dump+8+NetBSD-5.0.1+i386) or
[pax(1)](http://netbsd.gw.com/cgi-bin/man-cgi?pax+1+NetBSD-5.0.1+i386).
   
    # mount /dev/raid0a /mnt
    # cd /; pax -v -X -rw -pe . /mnt
    [...snip...]
   
The NetBSD install now exists on the RAID filesystem. We need to fix the
mount-points in the new copy of `/etc/fstab` or the system will not come up
correctly. Replace instances of `wd0` with `raid0`.
   
The swap should be unconfigured upon shutdown to avoid parity errors on the RAID
device. This can be done with a simple, one-line setting in `/etc/rc.conf`.
   
    # vi /mnt/etc/rc.conf
    swapoff=YES
   
Next, the boot loader must be installed on Disk1/wd1. Failure to install the
loader on Disk1/wd1 will render the system un-bootable if Disk0/wd0 fails. You
should hope your system won't have to reboot when wd0 fails.
   
*Tip*: Because the BIOS/CMOS menus in many i386 based systems are misleading
with regard to device boot order, I highly recommend utilizing the `-o
timeout=X` option supported by the i386 1st stage boot loader. Set up unique
values for each disk as a point of reference so that you can easily determine
from which disk the system is booting.
   
*Caution*: Although it may seem logical to install the 1st stage boot block into
`/dev/rwd1{c,d}` (which is historically correct with NetBSD 1.6.x
[installboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?installboot+8+NetBSD-5.0.1+i386)),
this is no longer the case. If you make this mistake, the boot sector will
become irrecoverably damaged and you will need to start the process over again.
   
On i386, install the boot loader into `/dev/rwd1a`:
    # /usr/sbin/installboot -v -o timeout=30 /dev/rwd1a /usr/mdec/bootxx_ffsv1
    Ignoring PBR with invalid magic in sector 0 of `/dev/rwd1a'
    Boot options:        timeout 30, flags 0, speed 9600, ioaddr 0, console pc
   
On sparc64, install the boot loader into `/dev/rwd1a` as well; however, the `-o`
flag is unsupported (and un-needed thanks to OpenBoot):
   
    # /usr/sbin/installboot -v /dev/rwd1a /usr/mdec/bootblk
    Bootstrap byte count:   5140
    Writing bootstrap
   
Finally the RAID set must be made auto-configurable and the system should be
rebooted. After the reboot everything is mounted from the RAID devices.
   
    # raidctl -v -A root raid0
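
The reboot itself should go through
[shutdown(8)](http://netbsd.gw.com/cgi-bin/man-cgi?shutdown+8+NetBSD-5.0.1+i386)
(see the warning below):

    # shutdown -r now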
   
### Warning
   
Always use
[shutdown(8)](http://netbsd.gw.com/cgi-bin/man-cgi?shutdown+8+NetBSD-5.0.1+i386)
when shutting down. Never simply use
[reboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?reboot+8+NetBSD-5.0.1+i386).
[reboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?reboot+8+NetBSD-5.0.1+i386)
will not properly run shutdown RC scripts and will not safely disable swap. This
will cause dirty parity at every reboot.
   
### The first boot with RAID
   
At this point, temporarily configure your system to boot Disk1/wd1. See notes in
[[Testing Boot Blocks|guide/rf#adding-text-boot]] for details on this process.
The system should boot now and all filesystems should be on the RAID devices.
The RAID will be functional with a single component; however, the set is not
fully functional because the bogus drive (wd9) has failed.
   
    # egrep -i "raid|root" /var/run/dmesg.boot
   
### Adding Disk0/wd0 to RAID
   
We will now add Disk0/wd0 as a component of the RAID. This will destroy the
original file system structure. On i386, the MBR disklabel will be unaffected
(remember we copied wd0's label to wd1 anyway), therefore there is no need to
"zero" Disk0/wd0. However, we need to relabel Disk0/wd0 to have an identical
NetBSD disklabel layout as Disk1/wd1. Then we add Disk0/wd0 as a "hot spare" to
the RAID set and initiate the parity reconstruction for all RAID devices,
effectively bringing Disk0/wd0 into the RAID-1 set and "syncing up" both disks.
   
    # disklabel -r wd1 > /tmp/disklabel.wd1
    # disklabel -R -r wd0 /tmp/disklabel.wd1
   
As a last-minute sanity check, you might want to use
[diff(1)](http://netbsd.gw.com/cgi-bin/man-cgi?diff+1+NetBSD-5.0.1+i386) to
ensure that the disklabels of Disk0/wd0 match Disk1/wd1. You should also backup
these files for reference in the event of an emergency.
   
    # disklabel -r wd0 > /tmp/disklabel.wd0
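
A quick comparison along those lines, using the files just saved:

    # diff /tmp/disklabel.wd0 /tmp/disklabel.wd1
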
Once you are sure, add Disk0/wd0 as a spare and initiate the reconstruction.
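
A sketch of those two steps (the component name `component0` corresponds to the
`absent` slot in the configuration created earlier):

    # raidctl -v -a /dev/wd0a raid0
    # raidctl -v -F component0 raid0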
    RECON: initiating reconstruction on col 0 -> spare at col 2
     11% |****                                   | ETA:    04:26 \
   
Depending on the speed of your hardware, the reconstruction time will vary. You
may wish to watch it on another terminal (note that you can interrupt
`raidctl -S` any time without stopping the synchronisation):
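
    # raidctl -S raid0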
   
After reconstruction, both disks should be visible in the status output:
         /dev/wd0a: used_spare
         [...snip...]
   
When the reconstruction is finished we need to install the boot loader on
Disk0/wd0. On i386, install the boot loader into `/dev/rwd0a`:
   
    # /usr/sbin/installboot -o timeout=15 -v /dev/rwd0a /usr/mdec/bootxx_ffsv1
On sparc64:
    Bootstrap byte count:   5140
    Writing bootstrap
   
And finally, reboot the machine one last time before proceeding. This is
required to migrate Disk0/wd0 from status "used\_spare" as "Component0" to state
"optimal". Refer to notes in the next section regarding verification of clean
parity after each reboot.
   
    # shutdown -r now
   
### Testing Boot Blocks
   
At this point, you need to ensure that your system's hardware can properly boot
using the boot blocks on either disk. On i386, this is a hardware-dependent
process that may be done via your motherboard CMOS/BIOS menu or your controller
card's configuration menu.
   
On i386, use the menu system on your machine to set the boot device order /
priority to Disk1/wd1 before Disk0/wd0. The examples here depict a generic Award
BIOS.
   
![Award BIOS i386 Boot Disk1/wd1](/guide/images/raidframe_awardbios2.png)
   
**Award BIOS i386 Boot Disk1/wd1**
   
Save changes and exit:
    Press return to boot now, any other key for boot menu
    booting hd0a:netbsd - starting in 30
   
You can determine that the BIOS is reading Disk1/wd1 because the timeout of the
boot loader is 30 seconds instead of 15. After the reboot, re-enter the BIOS and
configure the drive boot order back to the default:
   
![Award BIOS i386 Boot Disk0/wd0](/guide/images/raidframe_awardbios1.png)
   
**Award BIOS i386 Boot Disk0/wd0**
   
Save changes and exit:
    Press return to boot now, any other key for boot menu
    booting hd0a:netbsd - starting in 15
   
Notice how your custom kernel detects controller/bus/drive assignments
independently of what the BIOS assigns as the boot disk. This is the expected
behavior.
   
On sparc64, use the Sun OpenBoot **devalias** to confirm that both disks are bootable:
And the second disk:
         The Regents of the University of California.  All rights reserved.
    [...snip...]
   
At each boot, the following should appear in the NetBSD kernel
[dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386):
   
    Kernelized RAIDframe activated
    root on raid0a dumps on raid0b
    root file system type: ffs
   
Once you are certain that both disks are bootable, verify the RAID parity is
clean after each reboot:
   
    # raidctl -v -s raid0
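
If the parity is ever reported as dirty (for example, after an unclean
shutdown), it can be checked and rewritten with the `-P` flag; a minimal
sketch:

    # raidctl -P raid0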
