# NetBSD RAIDframe

## RAIDframe Introduction

### About RAIDframe

NetBSD uses the [CMU RAIDframe](http://www.pdl.cmu.edu/RAIDframe/) software for
its RAID subsystem. NetBSD is the primary platform for RAIDframe development.
RAIDframe can also be found in older versions of FreeBSD and OpenBSD. NetBSD
also has another way of bundling disks, the
[ccd(4)](http://netbsd.gw.com/cgi-bin/man-cgi?ccd+4+NetBSD-5.0.1+i386) subsystem
(see [Concatenated Disk Device](/guide/ccd)). You should possess some [basic
knowledge](http://www.acnc.com/04_00.html) about RAID concepts and terminology
before continuing. You should also be at least familiar with the different
levels of RAID - Adaptec provides an [excellent
reference](http://www.adaptec.com/en-US/_common/compatibility/_education/RAID_level_compar_wp.htm),
and the [raid(4)](http://netbsd.gw.com/cgi-bin/man-cgi?raid+4+NetBSD-5.0.1+i386)
manpage contains a short overview too.

### A warning about Data Integrity, Backups, and High Availability

RAIDframe is a Software RAID implementation, as opposed to Hardware RAID. As
such, it does not require a special RAID controller; any disk controller
supported by NetBSD will do. System administrators should give a great deal of
consideration to whether software RAID or hardware RAID is more appropriate for
their "Mission Critical" applications. For some projects you might consider the
use of many of the hardware RAID devices [supported by
NetBSD](http://www.NetBSD.org/support/hardware/). It is truly at your discretion
what type of RAID you use, but it is recommended that you consider factors such
as: manageability, commercial vendor support, load-balancing and failover, etc.

Depending on the RAID level used, RAIDframe does provide redundancy in the event
of a hardware failure. However, it is *not* a replacement for reliable backups!
Software and user error can still cause data loss. RAIDframe may be used as a
mechanism for facilitating backups in systems without backup hardware, but this
is not an ideal configuration. Finally, with regard to "high availability", RAID
is only a very small component of ensuring data availability.

Once more for good measure: *Back up your data!*

### Hardware versus Software RAID

If you run a server, it most probably already has a Hardware RAID controller.
There are reasons for and against using a Software RAID, depending on the
scenario.

In general, a Software RAID is well suited for low-IO system disks. If you run a
Software RAID, you can exchange disks and disk controllers, or even move the
disks to a completely different machine. The computational overhead for the RAID
is negligible if there are only a few disk IO operations.

If you need high IO throughput, you should use a Hardware RAID. With a Software
RAID, the redundancy data has to be transferred via the bus your disk controller
is connected to. With a Hardware RAID, you transfer data only once - the
redundancy computation and transfer are done by the controller.
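
To put rough numbers on this: with RAID-1 in software, every block the operating
system writes must be sent to both components, so a sustained 50 MB/s write load
generates roughly 2 × 50 = 100 MB/s of traffic on the host's bus, whereas a
hardware controller receives the data once and duplicates it internally.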

### Getting Help

If you encounter problems using RAIDframe, you have several options for
obtaining help.

 1. Read the RAIDframe man pages:
    [raid(4)](http://netbsd.gw.com/cgi-bin/man-cgi?raid+4+NetBSD-5.0.1+i386) and
    [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386)
    thoroughly.

 2. Search the mailing list archives. Unfortunately, there is no NetBSD list
    dedicated to RAIDframe support. Depending on the nature of the problem, posts
    tend to end up in a variety of lists. At a very minimum, search
    [netbsd-help](http://mail-index.NetBSD.org/netbsd-help/),
    [netbsd-users@NetBSD.org](http://mail-index.NetBSD.org/netbsd-users/), and
    [current-users@NetBSD.org](http://mail-index.NetBSD.org/current-users/). Also
    search the list for the NetBSD platform on which you are using RAIDframe:
    port-*`${ARCH}`*@NetBSD.org.

    *Caution*: Because RAIDframe is constantly undergoing development, some
    information in mailing list archives may be dated and inaccurate.

 3. Search the [Problem Report
    database](http://www.NetBSD.org/support/send-pr.html).

 4. If your problem persists: post to the most appropriate mailing list (a
    judgment call). Collect as much detailed information as possible before
    posting: include your
    [dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386)
    output from `/var/run/dmesg.boot`, your kernel
    [config(5)](http://netbsd.gw.com/cgi-bin/man-cgi?config+5+NetBSD-5.0.1+i386),
    your `/etc/raid[0-9].conf`, any relevant errors on `/dev/console`,
    `/var/log/messages`, or on `stdout/stderr` of
    [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386).
    The output of **raidctl -s** (if available) will be useful as well; a sketch
    of how this information might be collected follows this list. Also include
    details on the troubleshooting steps you've taken thus far, exactly when the
    problem started, and any notes on recent changes that may have prompted the
    problem to develop. Remember to be patient when waiting for a response.

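As a rough sketch of how that information might be gathered into one place
before posting (the report directory name and the `raid0` device are only
examples; adjust them to your setup):

    # mkdir /tmp/raidreport
    # cp /var/run/dmesg.boot /tmp/raidreport/
    # cp /etc/raid0.conf /tmp/raidreport/ 2>/dev/null
    # raidctl -s raid0 > /tmp/raidreport/raidctl-s.txt 2>&1
    # tail -100 /var/log/messages > /tmp/raidreport/messages.txt
    # tar -czf /tmp/raidreport.tgz -C /tmp raidreport
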
## Setup RAIDframe Support

The use of RAID will require software and hardware configuration changes.

### Kernel Support

The GENERIC kernel already has support for RAIDframe. If you have built a custom
kernel for your environment, the kernel configuration must have the following
options:

    pseudo-device   raid            8       # RAIDframe disk driver
    options         RAID_AUTOCONFIG         # auto-configuration of RAID components

The RAID support must be detected by the NetBSD kernel, which can be checked by
looking at the output of the
[dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386)
command.

    # dmesg|grep -i raid
    Kernelized RAIDframe activated

Historically, the kernel also had to contain static mappings between bus
addresses and device nodes in `/dev`. This was used to ensure consistency of
devices within RAID sets in the event of a device failure after reboot. Since
NetBSD 1.6, however, using the auto-configuration features of RAIDframe has been
recommended over statically mapping devices. The auto-configuration features
allow drives to move around on the system, and RAIDframe will automatically
determine which components belong to which RAID sets.
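
For reference, auto-configuration is turned on per RAID set with
[raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386);
the device name `raid0` below is only a placeholder, and the complete procedure
for a bootable set is shown in the example later in this chapter:

    # raidctl -A yes raid0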

### Power Redundancy and Disk Caching

If your system has an Uninterruptible Power Supply (UPS), redundant power
supplies, or a disk controller with a battery, you should consider enabling the
read and write caches on your drives. On systems with redundant power, this will
improve drive performance. On systems without redundant power, the write cache
could endanger the integrity of RAID data in the event of a power loss.

The [dkctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dkctl+8+NetBSD-5.0.1+i386)
utility can be used for this on all kinds of disks that support the operation
(SCSI, EIDE, SATA, ...):

    # dkctl wd0 getcache
    /dev/rwd0d: read cache enabled
    /dev/rwd0d: read cache enable is not changeable
    /dev/rwd0d: write cache enable is changeable
    /dev/rwd0d: cache parameters are not savable
    # dkctl wd0 setcache rw
    # dkctl wd0 getcache
    /dev/rwd0d: read cache enabled
    /dev/rwd0d: write-back cache enabled
    /dev/rwd0d: read cache enable is not changeable
    /dev/rwd0d: write cache enable is changeable
    /dev/rwd0d: cache parameters are not savable

## Example: RAID-1 Root Disk

1.6       jdf       154: This example explains how to setup RAID-1 root disk. With RAID-1 components are
                    155: mirrored and therefore the server can be fully functional in the event of a
                    156: single component failure. The goal is to provide a level of redundancy that will
                    157: allow the system to encounter a component failure on either component disk in
1.1       jdf       158: the RAID and:
                    159: 
                    160:  * Continue normal operations until a maintenance window can be scheduled.
1.6       jdf       161:  * Or, in the unlikely event that the component failure causes a system reboot,
1.1       jdf       162:    be able to quickly reconfigure the system to boot from the remaining
                    163:    component (platform dependent).
                    164: 
1.7       jdf       165: ![RAID-1 Disk Logical Layout](/guide/images/raidframe_raidl1-diskdia.png)
1.8     ! jdf       166: 
1.1       jdf       167: **RAID-1 Disk Logical Layout**
                    168: 
1.6       jdf       169: Because RAID-1 provides both redundancy and performance improvements, its most
                    170: practical application is on critical "system" partitions such as `/`, `/usr`,
                    171: `/var`, `swap`, etc., where read operations are more frequent than write
                    172: operations. For other file systems, such as `/home` or `/var/`, other RAID
                    173: levels might be considered (see the references above). If one were simply
                    174: creating a generic RAID-1 volume for a non-root file system, the cookie-cutter
                    175: examples from the man page could be followed, but because the root volume must
1.1       jdf       176: be bootable, certain special steps must be taken during initial setup.
                    177: 
1.6       jdf       178: *Note*: This example will outline a process that differs only slightly between
                    179: the i386 and sparc64 platforms. In an attempt to reduce excessive duplication of
                    180: content, where differences do exist and are cosmetic in nature, they will be
                    181: pointed out using a section such as this. If the process is drastically
1.1       jdf       182: different, the process will branch into separate, platform dependent steps.

### Pseudo-Process Outline

Although a much more refined process could be developed using a custom copy of
NetBSD installed on custom-developed removable media, presently the NetBSD
install media lacks RAIDframe tools and support, so the following pseudo process
has become the de facto standard for setting up RAID-1 Root.

 1. Install a stock NetBSD onto Disk0 of your system.

    ![Perform generic install onto Disk0/wd0](/guide/images/raidframe_r1r-pp1.png)

    **Perform generic install onto Disk0/wd0**

 2. Use the installed system on Disk0/wd0 to set up a RAID Set composed of
    Disk1/wd1 only.

    ![Setup RAID Set](/guide/images/raidframe_r1r-pp2.png)

    **Setup RAID Set**

 3. Reboot the system off Disk1/wd1 with the newly created RAID volume.

    ![Reboot using Disk1/wd1 of RAID](/guide/images/raidframe_r1r-pp3.png)

    **Reboot using Disk1/wd1 of RAID**

 4. Add/re-sync Disk0/wd0 back into the RAID set.

    ![Mirror Disk1/wd1 back to Disk0/wd0](/guide/images/raidframe_r1r-pp4.png)

    **Mirror Disk1/wd1 back to Disk0/wd0**

### Hardware Review

At present, the alpha, amd64, i386, pmax, sparc, sparc64, and vax NetBSD
platforms support booting from RAID-1. Booting is not supported from any other
RAID level. Booting from a RAID set is accomplished by teaching the 1st stage
boot loader to understand both 4.2BSD/FFS and RAID partitions. The 1st boot
block code only needs to know enough about the disk partitions and file systems
to be able to read the 2nd stage boot blocks. Therefore, at any time, the
system's BIOS/firmware must be able to read a drive with 1st stage boot blocks
installed. On the i386 platform, configuring this is entirely dependent on the
vendor of the controller card/host bus adapter to which your disks are
connected. On sparc64 this is controlled by the IEEE 1275 Sun OpenBoot Firmware.

This example assumes two identical IDE disks (`/dev/wd{0,1}`) which we are going
to mirror (RAID-1). These disks are identified as:

    # grep ^wd /var/run/dmesg.boot
    wd0 at atabus0 drive 0: <WDC WD100BB-75CLB0>
    wd0: drive supports 16-sector PIO transfers, LBA addressing
    wd0: 9541 MB, 19386 cyl, 16 head, 63 sec, 512 bytes/sect x 19541088 sectors
    wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
    wd0(piixide0:0:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)
    
    wd1 at atabus1 drive 0: <WDC WD100BB-75CLB0>
    wd1: drive supports 16-sector PIO transfers, LBA addressing
    wd1: 9541 MB, 19386 cyl, 16 head, 63 sec, 512 bytes/sect x 19541088 sectors
    wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
    wd1(piixide0:1:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)

*Note*: If you are using SCSI, replace `/dev/{,r}wd{0,1}` with
`/dev/{,r}sd{0,1}`.

1.6       jdf       251: In this example, both disks are jumpered as Master on separate channels on the
                    252: same controller. You usually wouldn't want to have both disks on the same bus on
                    253: the same controller; this creates a single point of failure. Ideally you would
                    254: have the disks on separate channels on separate controllers. Nonetheless, in
                    255: most cases the most critical point is the hard disk, so having redundant
                    256: channels or controllers is not that important. Plus, having more channels or
                    257: controllers increases costs. Some SCSI controllers have multiple channels on the
                    258: same controller, however, a SCSI bus reset on one channel could adversely affect
                    259: the other channel if the ASIC/IC becomes overloaded. The trade-off with two
                    260: controllers is that twice the bandwidth is used on the system bus. For purposes
                    261: of simplification, this example shows two disks on different channels on the
1.1       jdf       262: same controller.
                    263: 
1.6       jdf       264: *Note*: RAIDframe requires that all components be of the same size. Actually, it
                    265: will use the lowest common denominator among components of dissimilar sizes. For
                    266: purposes of illustration, the example uses two disks of identical geometries.
                    267: Also, consider the availability of replacement disks if a component suffers a
1.1       jdf       268: critical hardware failure.
                    269: 
1.6       jdf       270: *Tip*: Two disks of identical vendor model numbers could have different
                    271: geometries if the drive possesses "grown defects". Use a low-level program to
                    272: examine the grown defects table of the disk. These disks are obviously
1.1       jdf       273: suboptimal candidates for use in RAID and should be avoided.

### Initial Install on Disk0/wd0

Perform a very generic installation onto your Disk0/wd0. Follow the `INSTALL`
instructions for your platform. Install all the sets but do not bother
customizing anything other than the kernel as it will be overwritten.

*Tip*: On i386, during the sysinst install, when prompted if you want to `use
the entire disk for NetBSD`, answer `yes`.

 * [Installing NetBSD: Preliminary considerations and preparations](/guide/inst)
 * [NetBSD/i386 Install](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/i386/INSTALL.html)
 * [NetBSD/sparc64 Install](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/sparc64/INSTALL.html)

Once the installation is complete, you should examine the
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
and [fdisk(8)](http://netbsd.gw.com/cgi-bin/man-cgi?fdisk+8+NetBSD-5.0.1+i386) /
[sunlabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?sunlabel+8+NetBSD-5.0.1+i386)
outputs on the system:

    # df
    Filesystem   1K-blocks        Used       Avail %Cap Mounted on
    /dev/wd0a       9487886      502132     8511360   5% /

On i386:

    # disklabel -r wd0
    type: unknown
    disk: Disk00
    label:
    flags:
    bytes/sector: 512
    sectors/track: 63
    tracks/cylinder: 16
    sectors/cylinder: 1008
    cylinders: 19386
    total sectors: 19541088
    rpm: 3600
    interleave: 1
    trackskew: 0
    cylinderskew: 0
    headswitch: 0           # microseconds
    track-to-track seek: 0  # microseconds
    drivedata: 0
    
    16 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19276992        63     4.2BSD   1024  8192 46568  # (Cyl.      0* - 19124*)
     b:    264033  19277055       swap                     # (Cyl.  19124* - 19385)
     c:  19541025        63     unused      0     0        # (Cyl.      0* - 19385)
     d:  19541088         0     unused      0     0        # (Cyl.      0 - 19385)
    
    # fdisk /dev/rwd0d
    Disk: /dev/rwd0d
    NetBSD disklabel disk geometry:
    cylinders: 19386, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
    total sectors: 19541088
    
    BIOS disk geometry:
    cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
    total sectors: 19541088
    
    Partition table:
    0: NetBSD (sysid 169)
        start 63, size 19541025 (9542 MB, Cyls 0-1216/96/1), Active
    1: <UNUSED>
    2: <UNUSED>
    3: <UNUSED>
    Bootselector disabled.
    First active partition: 0

On sparc64 the command and output differ slightly:

    # disklabel -r wd0
    type: unknown
    disk: Disk0
    [...snip...]
    8 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19278000         0     4.2BSD   1024  8192 46568  # (Cyl.      0 -  19124)
     b:    263088  19278000       swap                     # (Cyl.  19125 -  19385)
     c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
    
    # sunlabel /dev/rwd0c
    sunlabel> P
    a: start cyl =      0, size = 19278000 (19125/0/0 - 9413.09Mb)
    b: start cyl =  19125, size =   263088 (261/0/0 - 128.461Mb)
    c: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)

### Preparing Disk1/wd1

Once you have a stock install of NetBSD on Disk0/wd0, you are ready to begin.
Disk1/wd1 will be visible and unused by the system. To set up Disk1/wd1, you
will use
[disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
to allocate the entire second disk to the RAID-1 set.

*Tip*: The best way to ensure that Disk1/wd1 is completely empty is to 'zero'
out the first few sectors of the disk with
[dd(1)](http://netbsd.gw.com/cgi-bin/man-cgi?dd+1+NetBSD-5.0.1+i386). This will
erase the MBR (i386) or Sun disk label (sparc64), as well as the NetBSD disk
label. If you make a mistake at any point during the RAID setup process, you can
always refer to this process to restore the disk to an empty state.

*Note*: On sparc64, use `/dev/rwd1c` instead of `/dev/rwd1d`!
                    379: 
                    380:     # dd if=/dev/zero of=/dev/rwd1d bs=8k count=1
                    381:     1+0 records in
                    382:     1+0 records out
                    383:     8192 bytes transferred in 0.003 secs (2730666 bytes/sec)
                    384: 
1.6       jdf       385: Once this is complete, on i386, verify that both the MBR and NetBSD disk labels
1.1       jdf       386: are gone. On sparc64, verify that the Sun Disk label is gone as well.
                    387: 
On i386:

    # fdisk /dev/rwd1d
    
    fdisk: primary partition table invalid, no magic in sector 0
    Disk: /dev/rwd1d
    NetBSD disklabel disk geometry:
    cylinders: 19386, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
    total sectors: 19541088
    
    BIOS disk geometry:
    cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
    total sectors: 19541088
    
    Partition table:
    0: <UNUSED>
    1: <UNUSED>
    2: <UNUSED>
    3: <UNUSED>
    Bootselector disabled.
    
    # disklabel -r wd1
    
    [...snip...]
    16 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     c:  19541025        63     unused      0     0        # (Cyl.      0* - 19385)
     d:  19541088         0     unused      0     0        # (Cyl.      0 - 19385)

On sparc64:

    # sunlabel /dev/rwd1c
    
    sunlabel: bogus label on `/dev/wd1c' (bad magic number)
    
    # disklabel -r wd1
    
    [...snip...]
    3 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
    disklabel: boot block size 0
    disklabel: super block size 0

1.6       jdf       432: Now that you are certain the second disk is empty, on i386 you must establish
                    433: the MBR on the second disk using the values obtained from Disk0/wd0 above. We
                    434: must remember to mark the NetBSD partition active or the system will not boot.
                    435: You must also create a NetBSD disklabel on Disk1/wd1 that will enable a RAID
                    436: volume to exist upon it. On sparc64, you will need to simply
                    437: [disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
1.1       jdf       438: the second disk which will write the proper Sun Disk Label.
                    439: 
1.6       jdf       440: *Tip*:
                    441: [disklabel(8)](http://netbsd.gw.com/cgi-bin/man-cgi?disklabel+8+NetBSD-5.0.1+i386)
                    442: will use your shell' s environment variable `$EDITOR` variable to edit the
                    443: disklabel. The default is
1.1       jdf       444: [vi(1)](http://netbsd.gw.com/cgi-bin/man-cgi?vi+1+NetBSD-5.0.1+i386)
                    445: 
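For example, to use a different editor for a single invocation (here `ed`,
purely as an illustration), the variable can be overridden on the command line:

    # env EDITOR=ed disklabel -r -e -I wd1
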
On i386:

    # fdisk -0ua /dev/rwd1d
    fdisk: primary partition table invalid, no magic in sector 0
    Disk: /dev/rwd1d
    NetBSD disklabel disk geometry:
    cylinders: 19386, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
    total sectors: 19541088
    
    BIOS disk geometry:
    cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
    total sectors: 19541088
    
    Do you want to change our idea of what BIOS thinks? [n]
    
    Partition 0:
    <UNUSED>
    The data for partition 0 is:
    <UNUSED>
    sysid: [0..255 default: 169]
    start: [0..1216cyl default: 63, 0cyl, 0MB]
    size: [0..1216cyl default: 19541025, 1216cyl, 9542MB]
    bootmenu: []
    Do you want to change the active partition? [n] y
    Choosing 4 will make no partition active.
    active partition: [0..4 default: 0] 0
    Are you happy with this choice? [n] y
    
    We haven't written the MBR back to disk yet.  This is your last chance.
    Partition table:
    0: NetBSD (sysid 169)
        start 63, size 19541025 (9542 MB, Cyls 0-1216/96/1), Active
    1: <UNUSED>
    2: <UNUSED>
    3: <UNUSED>
    Bootselector disabled.
    Should we write new partition table? [n] y
    
    # disklabel -r -e -I wd1
    type: unknown
    disk: Disk1
    label:
    flags:
    bytes/sector: 512
    sectors/track: 63
    tracks/cylinder: 16
    sectors/cylinder: 1008
    cylinders: 19386
    total sectors: 19541088
    [...snip...]
    16 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19541025        63       RAID                     # (Cyl.      0*-19385)
     c:  19541025        63     unused      0     0        # (Cyl.      0*-19385)
     d:  19541088         0     unused      0     0        # (Cyl.      0 -19385)

On sparc64:

    # disklabel -r -e -I wd1
    type: unknown
    disk: Disk1
    label:
    flags:
    bytes/sector: 512
    sectors/track: 63
    tracks/cylinder: 16
    sectors/cylinder: 1008
    cylinders: 19386
    total sectors: 19541088
    [...snip...]
    3 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19541088         0       RAID                     # (Cyl.      0 -  19385)
     c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
    
    # sunlabel /dev/rwd1c
    sunlabel> P
    a: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
    c: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)

1.6       jdf       526: *Note*: On i386, the `c:` and `d:` slices are reserved. `c:` represents the
                    527: NetBSD portion of the disk. `d:` represents the entire disk. Because we want to
                    528: allocate the entire NetBSD MBR partition to RAID, and because `a:` resides
                    529: within the bounds of `c:`, the `a:` and `c:` slices have same size and offset
                    530: values and sizes. The offset must start at a track boundary (an increment of
                    531: sectors matching the sectors/track value in the disk label). On sparc64 however,
                    532: `c:` represents the entire NetBSD partition in the Sun disk label and `d:` is
                    533: not reserved. Also note that sparc64's `c:` and `a:` require no offset from the
                    534: beginning of the disk, however if they should need to be, the offset must start
                    535: at a cylinder boundary (an increment of sectors matching the sectors/cylinder
1.1       jdf       536: value).

### Initializing the RAID Device

Next we create the configuration file for the RAID set/volume. Traditionally,
RAIDframe configuration files belong in `/etc` and would be read and initialized
at boot time; however, because we are creating a bootable RAID volume, the
configuration data will actually be written into the RAID volume using the
*auto-configure* feature. Therefore, the configuration file is needed only
during the initial setup and should not reside in `/etc`.

    # vi /var/tmp/raid0.conf
    START array
    1 2 0
    
    START disks
    absent
    /dev/wd1a
    
    START layout
    128 1 1 1
    
    START queue
    fifo 100

Note that `absent` means a non-existing disk. This will allow us to establish
the RAID volume with a bogus component that we will replace with Disk0/wd0 at a
later time.

Next we configure the RAID device and initialize the serial number to something
unique. In this example we use a "YYYYMMDD*`Revision`*" scheme. The format you
choose is entirely at your discretion; however, the scheme you choose should
ensure that no two RAID sets use the same serial number at the same time.

After that we initialize the RAID set for the first time, safely ignoring the
errors regarding the bogus component.

    # raidctl -v -C /var/tmp/raid0.conf raid0
    Ignoring missing component at column 0
    raid0: Component absent being configured at col: 0
             Column: 0 Num Columns: 0
             Version: 0 Serial Number: 0 Mod Counter: 0
             Clean: No Status: 0
    Number of columns do not match for: absent
    absent is not clean!
    raid0: Component /dev/wd1a being configured at col: 1
             Column: 0 Num Columns: 0
             Version: 0 Serial Number: 0 Mod Counter: 0
             Clean: No Status: 0
    Column out of alignment for: /dev/wd1a
    Number of columns do not match for: /dev/wd1a
    /dev/wd1a is not clean!
    raid0: There were fatal errors
    raid0: Fatal errors being ignored.
    raid0: RAID Level 1
    raid0: Components: component0[**FAILED**] /dev/wd1a
    raid0: Total Sectors: 19540864 (9541 MB)
    # raidctl -v -I 2009122601 raid0
    # raidctl -v -i raid0
    Initiating re-write of parity
    raid0: Error re-writing parity!
    Parity Re-write status:
    
    # tail -1 /var/log/messages
    Dec 26 00:00:30  /netbsd: raid0: Error re-writing parity!
    # raidctl -v -s raid0
    Components:
              component0: failed
               /dev/wd1a: optimal
    No spares.
    component0 status is: failed.  Skipping label.
    Component label for /dev/wd1a:
       Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
       Version: 2, Serial Number: 2009122601, Mod Counter: 7
       Clean: No, Status: 0
       sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
       Queue size: 100, blocksize: 512, numBlocks: 19540864
       RAID Level: 1
       Autoconfig: No
       Root partition: No
       Last configured as: raid0
    Parity status: DIRTY
    Reconstruction is 100% complete.
    Parity Re-write is 100% complete.
    Copyback is 100% complete.

### Setting up Filesystems

*Caution*: The root filesystem must begin at sector 0 of the RAID device. If
not, the primary boot loader will be unable to find the secondary boot loader.

The RAID device is now configured and available. The RAID device is a pseudo
disk device. It will be created with a default disk label. You must now
determine the proper sizes for disklabel slices for your production environment.
For purposes of simplification in this example, our system will have 8.5
gigabytes dedicated to `/` as `/dev/raid0a` and the rest allocated to `swap`
as `/dev/raid0b`.

*Caution*: This is an unrealistic disk layout for a production server; the
NetBSD Guide can expand on proper partitioning technique. See [Installing
NetBSD: Preliminary considerations and preparations](/guide/inst).

*Note*: 1 GB is 2\*1024\*1024=2097152 blocks (1 block is 512 bytes, or
0.5 kilobytes). Regardless of the hardware underlying the RAID set, the RAID
pseudo disk will always have 512 bytes/sector.

*Note*: In our example, the space allocated to the underlying `a:` slice
composing the RAID set differs between i386 and sparc64; therefore, the total
sector counts of the RAID volumes differ:

On i386:

    # disklabel -r -e -I raid0
    type: RAID
    disk: raid
    label: fictitious
    flags:
    bytes/sector: 512
    sectors/track: 128
    tracks/cylinder: 8
    sectors/cylinder: 1024
    cylinders: 19082
    total sectors: 19540864
    rpm: 3600
    interleave: 1
    trackskew: 0
    cylinderskew: 0
    headswitch: 0 # microseconds
    track-to-track seek: 0 # microseconds
    drivedata: 0
    
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19015680         0     4.2BSD      0     0     0  # (Cyl.      0 - 18569)
     b:    525184  19015680       swap                     # (Cyl.  18570 - 19082*)
     d:  19540864         0     unused      0     0        # (Cyl.      0 - 19082*)

On sparc64:

    # disklabel -r -e -I raid0
    [...snip...]
    total sectors: 19539968
    [...snip...]
    3 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19251200         0     4.2BSD      0     0     0  # (Cyl.      0 -  18799)
     b:    288768  19251200       swap                     # (Cyl.  18800 -  19081)
     c:  19539968         0     unused      0     0        # (Cyl.      0 -  19081)

Next, format the newly created `/` partition as a 4.2BSD FFSv1 File System:

    # newfs -O 1 /dev/rraid0a
    /dev/rraid0a: 9285.0MB (19015680 sectors) block size 16384, fragment size 2048
            using 51 cylinder groups of 182.06MB, 11652 blks, 23040 inodes.
    super-block backups (for fsck -b #) at:
    32, 372896, 745760, 1118624, 1491488, 1864352, 2237216, 2610080, 2982944,
    ...............................................................................
    
    # fsck -fy /dev/rraid0a
    ** /dev/rraid0a
    ** File system is already clean
    ** Last Mounted on
    ** Phase 1 - Check Blocks and Sizes
    ** Phase 2 - Check Pathnames
    ** Phase 3 - Check Connectivity
    ** Phase 4 - Check Reference Counts
    ** Phase 5 - Check Cyl groups
    1 files, 1 used, 4679654 free (14 frags, 584955 blocks, 0.0% fragmentation)

### Migrating System to RAID

The new RAID filesystems are now ready for use. We mount them under `/mnt` and
copy all files from the old system. This can be done using
[dump(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dump+8+NetBSD-5.0.1+i386) or
[pax(1)](http://netbsd.gw.com/cgi-bin/man-cgi?pax+1+NetBSD-5.0.1+i386).

    # mount /dev/raid0a /mnt
    # df -h /mnt
    Filesystem        Size       Used      Avail %Cap Mounted on
    /dev/raid0a       8.9G       2.0K       8.5G   0% /mnt
    # cd /; pax -v -X -rw -pe . /mnt
    [...snip...]

1.6       jdf       718: The NetBSD install now exists on the RAID filesystem. We need to fix the
                    719: mount-points in the new copy of `/etc/fstab` or the system will not come up
1.1       jdf       720: correctly. Replace instances of `wd0` with `raid0`.

The swap should be unconfigured upon shutdown to avoid parity errors on the RAID
device. This can be done with a simple, one-line setting in `/etc/rc.conf`.

    # vi /mnt/etc/rc.conf
    swapoff=YES

Next, the boot loader must be installed on Disk1/wd1. Failure to install the
loader on Disk1/wd1 will render the system unbootable if Disk0/wd0 fails. You
should hope your system won't have to reboot when wd0 fails.

*Tip*: Because the BIOS/CMOS menus in many i386-based systems are misleading
with regard to device boot order, it is highly recommended to use the `-o
timeout=X` option supported by the i386 1st stage boot loader. Set up unique
values for each disk as a point of reference so that you can easily determine
from which disk the system is booting.

*Caution*: Although it may seem logical to install the 1st stage boot block into
`/dev/rwd1{c,d}` (which is historically correct with the NetBSD 1.6.x
[installboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?installboot+8+NetBSD-5.0.1+i386)),
this is no longer the case. If you make this mistake, the boot sector will
become irrecoverably damaged and you will need to start the process over again.

On i386, install the boot loader into `/dev/rwd1a`:

    # /usr/sbin/installboot -o timeout=30 -v /dev/rwd1a /usr/mdec/bootxx_ffsv1
    File system:         /dev/rwd1a
    Primary bootstrap:   /usr/mdec/bootxx_ffsv1
    Ignoring PBR with invalid magic in sector 0 of `/dev/rwd1a'
    Boot options:        timeout 30, flags 0, speed 9600, ioaddr 0, console pc

On sparc64, install the boot loader into `/dev/rwd1a` as well; however, the `-o`
flag is unsupported (and unneeded thanks to OpenBoot):

    # /usr/sbin/installboot -v /dev/rwd1a /usr/mdec/bootblk
    File system:         /dev/rwd1a
    Primary bootstrap:   /usr/mdec/bootblk
    Bootstrap start sector: 1
    Bootstrap byte count:   5140
    Writing bootstrap

Finally, the RAID set must be made auto-configurable and the system should be
rebooted. After the reboot, everything is mounted from the RAID devices.

    # raidctl -v -A root raid0
    raid0: Autoconfigure: Yes
    raid0: Root: Yes
    # tail -2 /var/log/messages
    raid0: New autoconfig value is: 1
    raid0: New rootpartition value is: 1
    # raidctl -v -s raid0
    [...snip...]
       Autoconfig: Yes
       Root partition: Yes
       Last configured as: raid0
    [...snip...]
    # shutdown -r now

*Warning*: Always use
[shutdown(8)](http://netbsd.gw.com/cgi-bin/man-cgi?shutdown+8+NetBSD-5.0.1+i386)
when shutting down. Never simply use
[reboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?reboot+8+NetBSD-5.0.1+i386), as
[reboot(8)](http://netbsd.gw.com/cgi-bin/man-cgi?reboot+8+NetBSD-5.0.1+i386)
will not properly run shutdown RC scripts and will not safely disable swap. This
will cause dirty parity at every reboot.

### The first boot with RAID

At this point, temporarily configure your system to boot Disk1/wd1. See notes in
[[Testing Boot Blocks|guide/rf#adding-text-boot]] for details on this process.
The system should boot now and all filesystems should be on the RAID devices.
The RAID will be functional with a single component; however, the set is not
fully functional because the bogus component (`component0`) has failed.

    # egrep -i "raid|root" /var/run/dmesg.boot
    raid0: RAID Level 1
    raid0: Components: component0[**FAILED**] /dev/wd1a
    raid0: Total Sectors: 19540864 (9541 MB)
    boot device: raid0
    root on raid0a dumps on raid0b
    root file system type: ffs
    
    # df -h
    Filesystem    Size     Used     Avail Capacity  Mounted on
    /dev/raid0a   8.9G     196M      8.3G     2%    /
    kernfs        1.0K     1.0K        0B   100%    /kern
    
    # swapctl -l
    Device      1K-blocks     Used    Avail Capacity  Priority
    /dev/raid0b    262592        0   262592     0%    0
    # raidctl -s raid0
    Components:
              component0: failed
               /dev/wd1a: optimal
    No spares.
    component0 status is: failed.  Skipping label.
    Component label for /dev/wd1a:
       Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
       Version: 2, Serial Number: 2009122601, Mod Counter: 65
       Clean: No, Status: 0
       sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
       Queue size: 100, blocksize: 512, numBlocks: 19540864
       RAID Level: 1
       Autoconfig: Yes
       Root partition: Yes
       Last configured as: raid0
    Parity status: DIRTY
    Reconstruction is 100% complete.
    Parity Re-write is 100% complete.
    Copyback is 100% complete.

### Adding Disk0/wd0 to RAID

We will now add Disk0/wd0 as a component of the RAID. This will destroy the
original file system structure. On i386, the MBR disklabel will be unaffected
(remember, we copied wd0's label to wd1 anyway), therefore there is no need to
"zero" Disk0/wd0. However, we need to relabel Disk0/wd0 with a NetBSD disklabel
layout identical to Disk1/wd1's. Then we add Disk0/wd0 as a "hot spare" to the
RAID set and initiate the parity reconstruction for all RAID devices,
effectively bringing Disk0/wd0 into the RAID-1 set and "syncing up" both disks.

    # disklabel -r wd1 > /tmp/disklabel.wd1
    # disklabel -R -r wd0 /tmp/disklabel.wd1

As a last-minute sanity check, you might want to use
[diff(1)](http://netbsd.gw.com/cgi-bin/man-cgi?diff+1+NetBSD-5.0.1+i386) to
ensure that the disklabels of Disk0/wd0 match Disk1/wd1. You should also back up
these files for reference in the event of an emergency.
                    849: 
                    850:     # disklabel -r wd0 > /tmp/disklabel.wd0
                    851:     # disklabel -r wd1 > /tmp/disklabel.wd1
                    852:     # diff /tmp/disklabel.wd0 /tmp/disklabel.wd1
                    853:     # fdisk /dev/rwd0 > /tmp/fdisk.wd0
                    854:     # fdisk /dev/rwd1 > /tmp/fdisk.wd1
                    855:     # diff /tmp/fdisk.wd0 /tmp/fdisk.wd1
                    856:     # mkdir /root/RFbackup
                    857:     # cp -p /tmp/{disklabel,fdisk}* /root/RFbackup
                    858: 
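                         Should you ever need to restore a label from one of these backups, the same
                         `disklabel -R -r` invocation used above can read the protofile back. A
                         hypothetical recovery of wd0's label from the copy kept in `/root/RFbackup`
                         might look like this:

                             # disklabel -R -r wd0 /root/RFbackup/disklabel.wd0
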
                    859: Once you are sure, add Disk0/wd0 as a spare component, and start reconstruction:
                    860: 
                    861:     # raidctl -v -a /dev/wd0a raid0
                    862:     /netbsd: Warning: truncating spare disk /dev/wd0a to 241254528 blocks
                    863:     # raidctl -v -s raid0
                    864:     Components:
                    865:               component0: failed
                    866:                /dev/wd1a: optimal
                    867:     Spares:
                    868:                /dev/wd0a: spare
                    869:     [...snip...]
                    870:     # raidctl -F component0 raid0
                    871:     RECON: initiating reconstruction on col 0 -> spare at col 2
                    872:      11% |****                                   | ETA:    04:26 \
                    873: 
1.6       jdf       874: Depending on the speed of your hardware, the reconstruction time will vary. You
1.1       jdf       875: may wish to watch it on another terminal (note that you can interrupt
                    876: `raidctl -S` at any time without stopping the synchronisation):
                    877: 
                    878:     # raidctl -S raid0
                    879:     Reconstruction is 0% complete.
                    880:     Parity Re-write is 100% complete.
                    881:     Copyback is 100% complete.
                    882:     Reconstruction status:
                    883:       17% |******                                 | ETA: 03:08 -
                    884: 
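                         If you prefer not to re-run that command by hand, a small shell loop (just a
                         sketch, not a feature of raidctl(8) itself) can poll the status every 30
                         seconds until you stop it with `^C`:

                             # while true; do raidctl -S raid0; sleep 30; done
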
                    885: After reconstruction, both disks should be *optimal*.
                    886: 
                    887:     # tail -f /var/log/messages
                    888:     raid0: Reconstruction of disk at col 0 completed
                    889:     raid0: Recon time was 1290.625033 seconds, accumulated XOR time was 0 us (0.000000)
                    890:     raid0:  (start time 1093407069 sec 145393 usec, end time 1093408359 sec 770426 usec)
                    891:     raid0: Total head-sep stall count was 0
                    892:     raid0: 305318 recon event waits, 1 recon delays
                    893:     raid0: 1093407069060000 max exec ticks
                    894:     
                    895:     # raidctl -v -s raid0
                    896:     Components:
                    897:                component0: spared
                    898:                /dev/wd1a: optimal
                    899:     Spares:
                    900:          /dev/wd0a: used_spare
                    901:          [...snip...]
                    902: 
1.6       jdf       903: When the reconstruction is finished, we need to install the boot loader on
1.1       jdf       904: Disk0/wd0. On i386, install the boot loader into `/dev/rwd0a`:
                    905: 
                    906:     # /usr/sbin/installboot -o timeout=15 -v /dev/rwd0a /usr/mdec/bootxx_ffsv1
                    907:     File system:         /dev/rwd0a
                    908:     Primary bootstrap:   /usr/mdec/bootxx_ffsv1
                    909:     Boot options:        timeout 15, flags 0, speed 9600, ioaddr 0, console pc
                    910: 
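                         The example above assumes an FFSv1 root file system, hence the `bootxx_ffsv1`
                         primary bootstrap. If you are unsure which variant your root file system
                         uses, [dumpfs(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dumpfs+8+NetBSD-5.0.1+i386)
                         reports the file system format near the top of its output, so you can choose
                         between `bootxx_ffsv1` and `bootxx_ffsv2` accordingly:

                             # dumpfs /dev/rwd0a | head
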
                    911: On sparc64:
                    912: 
                    913:     # /usr/sbin/installboot -v /dev/rwd0a /usr/mdec/bootblk
                    914:     File system:         /dev/rwd0a
                    915:     Primary bootstrap:   /usr/mdec/bootblk
                    916:     Bootstrap start sector: 1
                    917:     Bootstrap byte count:   5140
                    918:     Writing bootstrap
                    919: 
1.6       jdf       920: And finally, reboot the machine one last time before proceeding. This is
                    921: required to migrate Disk0/wd0 from the "used\_spare" state to "optimal" as
                    922: "component0". Refer to the notes in the next section regarding verification of clean
1.1       jdf       923: parity after each reboot.
                    924: 
                    925:     # shutdown -r now
                    926: 
                    927: ### Testing Boot Blocks
                    928: 
1.6       jdf       929: At this point, you need to ensure that your system's hardware can properly boot
                    930: using the boot blocks on either disk. On i386, this is a hardware-dependent
                    931: process that may be done via your motherboard CMOS/BIOS menu or your controller
1.1       jdf       932: card's configuration menu.
                    933: 
1.6       jdf       934: On i386, use the menu system on your machine to set the boot device order /
                    935: priority to Disk1/wd1 before Disk0/wd0. The examples here depict a generic Award
1.1       jdf       937: BIOS.
                    938: 
1.4       jdf       939: ![Award BIOS i386 Boot Disk1/wd1](/guide/images/raidframe_awardbios2.png)
1.8     ! jdf       940: 
1.1       jdf       941: **Award BIOS i386 Boot Disk1/wd1**
                    942: 
                    943: Save changes and exit:
                    944: 
                    945:     >> NetBSD/i386 BIOS Boot, Revision 5.2 (from NetBSD 5.0.2)
                    946:     >> (builds@b7, Sun Feb 7 00:30:50 UTC 2010)
                    947:     >> Memory: 639/130048 k
                    948:     Press return to boot now, any other key for boot menu
                    949:     booting hd0a:netbsd - starting in 30
                    950: 
1.5       jdf       951: You can determine that the BIOS is reading Disk1/wd1 because the timeout of the
1.5       jdf       953: boot loader is 30 seconds instead of 15. After the reboot, re-enter the BIOS and
1.1       jdf       954: configure the drive boot order back to the default:
                    955: 
1.4       jdf       956: ![Award BIOS i386 Boot Disk0/wd0](/guide/images/raidframe_awardbios1.png)
1.8     ! jdf       957: 
1.1       jdf       958: **Award BIOS i386 Boot Disk0/wd0**
                    959: 
                    960: Save changes and exit:
                    961: 
                    962:     >> NetBSD/i386 BIOS Boot, Revision 5.2 (from NetBSD 5.0.2)
                    963:     >> Memory: 639/130048 k
                    964:     Press return to boot now, any other key for boot menu
                    965:     booting hd0a:netbsd - starting in 15
                    966: 
1.6       jdf       967: Notice how your custom kernel detects controller/bus/drive assignments
                    968: independently of what the BIOS assigns as the boot disk. This is the expected
1.1       jdf       969: behavior.
                    970: 
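                         As a quick sanity check from the running system, you can confirm which
                         physical disks attached and that root really is on the RAID set by looking
                         at the kernel messages and the `kern.root_device` sysctl (the exact
                         attachment lines will vary with your hardware; on this configuration the
                         sysctl should report `raid0`):

                             # dmesg | grep '^wd'
                             # sysctl kern.root_device
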
                    971: On sparc64, use the Sun OpenBoot **devalias** to confirm that both disks are bootable:
                    972: 
                    973:     Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 400MHz), No Keyboard
                    974:     OpenBoot 3.15, 128 MB memory installed, Serial #nnnnnnnn.
                    975:     Ethernet address 8:0:20:a5:d1:3b, Host ID: nnnnnnnn.
                    976:     
                    977:     ok devalias
                    978:     [...snip...]
                    979:     cdrom /pci@1f,0/pci@1,1/ide@3/cdrom@2,0:f
                    980:     disk /pci@1f,0/pci@1,1/ide@3/disk@0,0
                    981:     disk3 /pci@1f,0/pci@1,1/ide@3/disk@3,0
                    982:     disk2 /pci@1f,0/pci@1,1/ide@3/disk@2,0
                    983:     disk1 /pci@1f,0/pci@1,1/ide@3/disk@1,0
                    984:     disk0 /pci@1f,0/pci@1,1/ide@3/disk@0,0
                    985:     [...snip...]
                    986:     
                    987:     ok boot disk0 netbsd
                    988:     Initializing Memory [...]
                    989:     Boot device /pci/pci/ide@3/disk@0,0 File and args: netbsd
                    990:     NetBSD IEEE 1275 Bootblock
                    991:     >> NetBSD/sparc64 OpenFirmware Boot, Revision 1.13
                    992:     >> (builds@b7.netbsd.org, Wed Jul 29 23:43:42 UTC 2009)
                    993:     loadfile: reading header
                    994:     elf64_exec: Booting [...]
                    995:     symbols @ [....]
                    996:      Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
                    997:          2006, 2007, 2008, 2009
                    998:          The NetBSD Foundation, Inc.  All rights reserved.
                    999:      Copyright (c) 1982, 1986, 1989, 1991, 1993
                   1000:          The Regents of the University of California.  All rights reserved.
                   1001:     [...snip...]
                   1002: 
                   1003: And the second disk:
                   1004: 
                   1005:     ok boot disk2 netbsd
                   1006:     Initializing Memory [...]
                   1007:     Boot device /pci/pci/ide@3/disk@2,0: File and args:netbsd
                   1008:     NetBSD IEEE 1275 Bootblock
                   1009:     >> NetBSD/sparc64 OpenFirmware Boot, Revision 1.13
                   1010:     >> (builds@b7.netbsd.org, Wed Jul 29 23:43:42 UTC 2009)
                   1011:     loadfile: reading header
                   1012:     elf64_exec: Booting [...]
                   1013:     symbols @ [....]
                   1014:      Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
                   1015:          2006, 2007, 2008, 2009
                   1016:          The NetBSD Foundation, Inc.  All rights reserved.
                   1017:      Copyright (c) 1982, 1986, 1989, 1991, 1993
                   1018:          The Regents of the University of California.  All rights reserved.
                   1019:     [...snip...]
                   1020: 
1.6       jdf      1021: At each boot, the following should appear in the NetBSD kernel
1.1       jdf      1022: [dmesg(8)](http://netbsd.gw.com/cgi-bin/man-cgi?dmesg+8+NetBSD-5.0.1+i386):
                   1023: 
                   1024:     Kernelized RAIDframe activated
                   1025:     raid0: RAID Level 1
                   1026:     raid0: Components: /dev/wd0a /dev/wd1a
                   1027:     raid0: Total Sectors: 19540864 (9541 MB)
                   1028:     boot device: raid0
                   1029:     root on raid0a dumps on raid0b
                   1030:     root file system type: ffs
                   1031: 
1.6       jdf      1032: Once you are certain that both disks are bootable, verify that the RAID parity
1.1       jdf      1033: is clean after each reboot:
                   1034: 
                   1035:     # raidctl -v -s raid0
                   1036:     Components:
                   1037:               /dev/wd0a: optimal
                   1038:               /dev/wd1a: optimal
                   1039:     No spares.
                   1040:     [...snip...]
                   1041:     Component label for /dev/wd0a:
                   1042:        Row: 0, Column: 0, Num Rows: 1, Num Columns: 2
                   1043:        Version: 2, Serial Number: 2009122601, Mod Counter: 67
                   1044:        Clean: No, Status: 0
                   1045:        sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
                   1046:        Queue size: 100, blocksize: 512, numBlocks: 19540864
                   1047:        RAID Level: 1
                   1048:        Autoconfig: Yes
                   1049:        Root partition: Yes
                   1050:        Last configured as: raid0
                   1051:     Component label for /dev/wd1a:
                   1052:        Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
                   1053:        Version: 2, Serial Number: 2009122601, Mod Counter: 67
                   1054:        Clean: No, Status: 0
                   1055:        sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
                   1056:        Queue size: 100, blocksize: 512, numBlocks: 19540864
                   1057:        RAID Level: 1
                   1058:        Autoconfig: Yes
                   1059:        Root partition: Yes
                   1060:        Last configured as: raid0
                   1061:     Parity status: clean
                   1062:     Reconstruction is 100% complete.
                   1063:     Parity Re-write is 100% complete.
                   1064:     Copyback is 100% complete.
                   1065: 
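                         If `Parity status:` ever reports `DIRTY` here (for example after an unclean
                         shutdown), you can have RAIDframe check and, if necessary, rewrite the parity
                         with [raidctl(8)](http://netbsd.gw.com/cgi-bin/man-cgi?raidctl+8+NetBSD-5.0.1+i386):

                             # raidctl -P raid0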
