File: wikisrc/guide/raidframe.mdwn, revision 1.13, Fri Jun 19 19:18:31 2015 UTC, by plunky: replace direct links to manpages on netbsd.gw.com with templates

    1: **Contents**
    2: 
    3: [[!toc levels=3]]
    4: 
    5: # NetBSD RAIDframe
    6: 
    7: ## RAIDframe Introduction
    8: 
    9: ### About RAIDframe
   10: 
   11: NetBSD uses the [CMU RAIDframe](http://www.pdl.cmu.edu/RAIDframe/) software for
   12: its RAID subsystem. NetBSD is the primary platform for RAIDframe development.
   13: RAIDframe can also be found in older versions of FreeBSD and OpenBSD. NetBSD
   14: also has another way of bundling disks, the
   15: [[!template id=man name="ccd" section="4"]] subsystem
   16: (see [Concatenated Disk Device](/guide/ccd)). You should possess some [basic
   17: knowledge](http://www.acnc.com/04_00.html) about RAID concepts and terminology
   18: before continuing. You should also be at least familiar with the different
   19: levels of RAID - Adaptec provides an [excellent
   20: reference](http://www.adaptec.com/en-US/_common/compatibility/_education/RAID_level_compar_wp.htm),
   21: and the [[!template id=man name="raid" section="4"]]
   22: manpage contains a short overview too.
   23: 
   24: ### A warning about Data Integrity, Backups, and High Availability
   25: 
   26: RAIDframe is a Software RAID implementation, as opposed to Hardware RAID. As
   27: such, it does not need special disk controllers supported by NetBSD. System
   28: administrators should give a great deal of consideration to whether software
   29: RAID or hardware RAID is more appropriate for their "Mission Critical"
   30: applications. For some projects you might consider the use of many of the
   31: hardware RAID devices [supported by
   32: NetBSD](http://www.NetBSD.org/support/hardware/). It is truly at your discretion
what type of RAID you use, but it is recommended that you consider factors such
as manageability, commercial vendor support, load balancing, and failover.
   35: 
   36: Depending on the RAID level used, RAIDframe does provide redundancy in the event
   37: of a hardware failure. However, it is *not* a replacement for reliable backups!
   38: Software and user-error can still cause data loss. RAIDframe may be used as a
   39: mechanism for facilitating backups in systems without backup hardware, but this
   40: is not an ideal configuration. Finally, with regard to "high availability", RAID
   41: is only a very small component to ensuring data availability.
   42: 
   43: Once more for good measure: *Back up your data!*
   44: 
   45: ### Hardware versus Software RAID
   46: 
   47: If you run a server, it will most probably already have a Hardware RAID
   48: controller. There are reasons for and against using a Software RAID, depending
   49: on the scenario.
   50: 
   51: In general, a Software RAID is well suited for low-IO system disks. If you run a
   52: Software RAID, you can exchange disks and disk controllers, or even move the
disks to a completely different machine. The computational overhead for the RAID
is negligible if there are only a few disk I/O operations.
   55: 
If you need high I/O throughput, you should use a Hardware RAID. With a Software RAID, the
   57: redundancy data has to be transferred via the bus your disk controller is
   58: connected to. With a Hardware RAID, you transfer data only once - the redundancy
   59: computation and transfer is done by the controller.
   60: 
   61: ### Getting Help
   62: 
   63: If you encounter problems using RAIDframe, you have several options for
   64: obtaining help.
   65: 
   66:  1. Read the RAIDframe man pages:
   67:     [[!template id=man name="raid" section="4"]] and
   68:     [[!template id=man name="raidctl" section="8"]]
   69:     thoroughly.
   70: 
   71:  2. Search the mailing list archives. Unfortunately, there is no NetBSD list
   72:     dedicated to RAIDframe support. Depending on the nature of the problem, posts
   73:     tend to end up in a variety of lists. At a very minimum, search
   74:     [netbsd-help](http://mail-index.NetBSD.org/netbsd-help/),
   75:     [netbsd-users@NetBSD.org](http://mail-index.NetBSD.org/netbsd-users/),
   76:     [current-users@NetBSD.org](http://mail-index.NetBSD.org/current-users/). Also
   77:     search the list for the NetBSD platform on which you are using RAIDframe:
   78:     port-*`${ARCH}`*@NetBSD.org.
   79: 
    *Caution*: Because RAIDframe is constantly undergoing development, some
    information in mailing list archives may be dated or inaccurate.
   82: 
   83:  3. Search the [Problem Report
   84:     database](http://www.NetBSD.org/support/send-pr.html).
   85: 
   86:  4. If your problem persists: Post to the mailing list most appropriate
   87:     (judgment call). Collect as much verbosely detailed information as possible
   88:     before posting: Include your
   89:     [[!template id=man name="dmesg" section="8"]]
   90:     output from `/var/run/dmesg.boot`, your kernel
    [[!template id=man name="config" section="5"]],
   92:     your `/etc/raid[0-9].conf`, any relevant errors on `/dev/console`,
   93:     `/var/log/messages`, or to `stdout/stderr` of
   94:     [[!template id=man name="raidctl" section="8"]].
   95:     The output of **raidctl -s** (if available) will be useful as well. Also
   96:     include details on the troubleshooting steps you've taken thus far, exactly
   97:     when the problem started, and any notes on recent changes that may have
   98:     prompted the problem to develop. Remember to be patient when waiting for a
   99:     response.
  100: 
  101: ## Setup RAIDframe Support
  102: 
  103: The use of RAID will require software and hardware configuration changes.
  104: 
  105: ### Kernel Support
  106: 
  107: The GENERIC kernel already has support for RAIDframe. If you have built a custom
  108: kernel for your environment the kernel configuration must have the following
  109: options:
  110: 
  111:     pseudo-device   raid            8       # RAIDframe disk driver
  112:     options         RAID_AUTOCONFIG         # auto-configuration of RAID components
  113: 
  114: The RAID support must be detected by the NetBSD kernel, which can be checked by
  115: looking at the output of the
  116: [[!template id=man name="dmesg" section="8"]]
  117: command.
  118: 
  119:     # dmesg|grep -i raid
  120:     Kernelized RAIDframe activated
  121: 
  122: Historically, the kernel must also contain static mappings between bus addresses
  123: and device nodes in `/dev`. This used to ensure consistency of devices within
  124: RAID sets in the event of a device failure after reboot. Since NetBSD 1.6,
  125: however, using the auto-configuration features of RAIDframe has been recommended
  126: over statically mapping devices. The auto-configuration features allow drives to
  127: move around on the system, and RAIDframe will automatically determine which
  128: components belong to which RAID sets.
  129: 
  130: ### Power Redundancy and Disk Caching
  131: 
If your system has an Uninterruptible Power Supply (UPS), redundant power
supplies, or a battery-backed disk controller, you should
consider enabling the read and write caches on your drives. On systems with
  135: redundant power, this will improve drive performance. On systems without
  136: redundant power, the write cache could endanger the integrity of RAID data in
  137: the event of a power loss.
  138: 
  139: The [[!template id=man name="dkctl" section="8"]]
utility can be used for this on all kinds of disks that support the operation
  141: (SCSI, EIDE, SATA, ...):
  142: 
  143:     # dkctl wd0 getcache
  144:     /dev/rwd0d: read cache enabled
  145:     /dev/rwd0d: read cache enable is not changeable
  146:     /dev/rwd0d: write cache enable is changeable
  147:     /dev/rwd0d: cache parameters are not savable
  148:     # dkctl wd0 setcache rw
  149:     # dkctl wd0 getcache
  150:     /dev/rwd0d: read cache enabled
  151:     /dev/rwd0d: write-back cache enabled
  152:     /dev/rwd0d: read cache enable is not changeable
  153:     /dev/rwd0d: write cache enable is changeable
  154:     /dev/rwd0d: cache parameters are not savable
  155: 
  156: ## Example: RAID-1 Root Disk
  157: 
This example explains how to set up a RAID-1 root disk. With RAID-1, components are
mirrored, and therefore the server can remain fully functional in the event of a
  160: single component failure. The goal is to provide a level of redundancy that will
  161: allow the system to encounter a component failure on either component disk in
  162: the RAID and:
  163: 
  164:  * Continue normal operations until a maintenance window can be scheduled.
  165:  * Or, in the unlikely event that the component failure causes a system reboot,
  166:    be able to quickly reconfigure the system to boot from the remaining
  167:    component (platform dependent).
  168: 
  169: ![RAID-1 Disk Logical Layout](/guide/images/raidframe_raidl1-diskdia.png)  
  170: **RAID-1 Disk Logical Layout**
  171: 
  172: Because RAID-1 provides both redundancy and performance improvements, its most
  173: practical application is on critical "system" partitions such as `/`, `/usr`,
  174: `/var`, `swap`, etc., where read operations are more frequent than write
  175: operations. For other file systems, such as `/home` or `/var/`, other RAID
  176: levels might be considered (see the references above). If one were simply
  177: creating a generic RAID-1 volume for a non-root file system, the cookie-cutter
  178: examples from the man page could be followed, but because the root volume must
  179: be bootable, certain special steps must be taken during initial setup.
  180: 
  181: *Note*: This example will outline a process that differs only slightly between
  182: the i386 and sparc64 platforms. In an attempt to reduce excessive duplication of
  183: content, where differences do exist and are cosmetic in nature, they will be
  184: pointed out using a section such as this. If the process is drastically
  185: different, the process will branch into separate, platform dependent steps.
  186: 
  187: ### Pseudo-Process Outline
  188: 
  189: Although a much more refined process could be developed using a custom copy of
  190: NetBSD installed on custom-developed removable media, presently the NetBSD
install media lacks RAIDframe tools and support, so the following pseudo-process
has become the de facto standard for setting up RAID-1 root.
  193: 
  194:  1. Install a stock NetBSD onto Disk0 of your system.
  195: 
  196: 
  197:     ![Perform generic install onto Disk0/wd0](/guide/images/raidframe_r1r-pp1.png)  
  198:     **Perform generic install onto Disk0/wd0**
  199: 
  200:  2. Use the installed system on Disk0/wd0 to setup a RAID Set composed of
  201:     Disk1/wd1 only.
  202: 
  203:     ![Setup RAID Set](/guide/images/raidframe_r1r-pp2.png)  
  204:     **Setup RAID Set**
  205: 
  206:  3. Reboot the system off the Disk1/wd1 with the newly created RAID volume.
  207: 
  208: 
  209:     ![Reboot using Disk1/wd1 of RAID](/guide/images/raidframe_r1r-pp3.png)  
  210:     **Reboot using Disk1/wd1 of RAID**
  211: 
  212: 
  213:  4. Add/re-sync Disk0/wd0 back into the RAID set.
  214: 
  215:     ![Mirror Disk1/wd1 back to Disk0/wd0](/guide/images/raidframe_r1r-pp4.png)  
  216:     **Mirror Disk1/wd1 back to Disk0/wd0**
  217: 
  218: ### Hardware Review
  219: 
  220: At present, the alpha, amd64, i386, pmax, sparc, sparc64, and vax NetBSD
  221: platforms support booting from RAID-1. Booting is not supported from any other
  222: RAID level. Booting from a RAID set is accomplished by teaching the 1st stage
  223: boot loader to understand both 4.2BSD/FFS and RAID partitions. The 1st boot
  224: block code only needs to know enough about the disk partitions and file systems
  225: to be able to read the 2nd stage boot blocks. Therefore, at any time, the
  226: system's BIOS/firmware must be able to read a drive with 1st stage boot blocks
  227: installed. On the i386 platform, configuring this is entirely dependent on the
  228: vendor of the controller card/host bus adapter to which your disks are
  229: connected. On sparc64 this is controlled by the IEEE 1275 Sun OpenBoot Firmware.
  230: 
  231: This article assumes two identical IDE disks (`/dev/wd{0,1}`) which we are going
  232: to mirror (RAID-1). These disks are identified as:
  233: 
  234:     # grep ^wd /var/run/dmesg.boot
  235:     wd0 at atabus0 drive 0: <WDC WD100BB-75CLB0>
  236:     wd0: drive supports 16-sector PIO transfers, LBA addressing
  237:     wd0: 9541 MB, 19386 cyl, 16 head, 63 sec, 512 bytes/sect x 19541088 sectors
  238:     wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
  239:     wd0(piixide0:0:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)
  240:     
  241:     wd1 at atabus1 drive 0: <WDC WD100BB-75CLB0>
  242:     wd1: drive supports 16-sector PIO transfers, LBA addressing
  243:     wd1: 9541 MB, 19386 cyl, 16 head, 63 sec, 512 bytes/sect x 19541088 sectors
  244:     wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
  245:     wd1(piixide0:1:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)
  246: 
  247: *Note*: If you are using SCSI, replace `/dev/{,r}wd{0,1}` with
  248: `/dev/{,r}sd{0,1}`.
  249: 
  250: In this example, both disks are jumpered as Master on separate channels on the
  251: same controller. You usually wouldn't want to have both disks on the same bus on
  252: the same controller; this creates a single point of failure. Ideally you would
have the disks on separate channels on separate controllers. Nonetheless, in
most cases the most likely point of failure is the hard disk itself, so having
redundant channels or controllers is not that important. Also, having more channels or
  256: controllers increases costs. Some SCSI controllers have multiple channels on the
  257: same controller, however, a SCSI bus reset on one channel could adversely affect
  258: the other channel if the ASIC/IC becomes overloaded. The trade-off with two
  259: controllers is that twice the bandwidth is used on the system bus. For purposes
  260: of simplification, this example shows two disks on different channels on the
  261: same controller.
  262: 
*Note*: RAIDframe requires that all components be of the same size. In practice,
it will use the size of the smallest component when components have dissimilar
sizes. For purposes of illustration, the example uses two disks of identical geometries.
  266: Also, consider the availability of replacement disks if a component suffers a
  267: critical hardware failure.
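The sizing rule above can be illustrated with a small sketch; the sector counts below are hypothetical values for two mismatched components, not taken from this example's disks:

```shell
# Hypothetical component sizes, in 512-byte sectors (illustration only):
wd0a_sectors=19541088
wd1a_sectors=19551088

# RAIDframe sizes the set from the smallest component, so the extra
# sectors on the larger disk would simply go unused:
if [ "$wd0a_sectors" -lt "$wd1a_sectors" ]; then
    usable=$wd0a_sectors
else
    usable=$wd1a_sectors
fi
echo "usable sectors per component: $usable"
```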
  268: 
  269: *Tip*: Two disks of identical vendor model numbers could have different
  270: geometries if the drive possesses "grown defects". Use a low-level program to
  271: examine the grown defects table of the disk. These disks are obviously
  272: suboptimal candidates for use in RAID and should be avoided.
  273: 
  274: ### Initial Install on Disk0/wd0
  275: 
  276: Perform a very generic installation onto your Disk0/wd0. Follow the `INSTALL`
  277: instructions for your platform. Install all the sets but do not bother
  278: customizing anything other than the kernel as it will be overwritten.
  279: 
  280: *Tip*: On i386, during the sysinst install, when prompted if you want to `use
  281: the entire disk for NetBSD`, answer `yes`.
  282: 
  283:  * [Installing NetBSD: Preliminary considerations and preparations](/guide/inst)
  284:  * [NetBSD/i386 Install](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/i386/INSTALL.html)
  285:  * [NetBSD/sparc64 Install](http://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/sparc64/INSTALL.html)
  286: 
  287: Once the installation is complete, you should examine the
  288: [[!template id=man name="disklabel" section="8"]]
  289: and [[!template id=man name="fdisk" section="8"]] /
  290: [[!template id=man name="sunlabel" section="8"]]
  291: outputs on the system:
  292: 
  293:     # df
  294:     Filesystem   1K-blocks        Used       Avail %Cap Mounted on
  295:     /dev/wd0a       9487886      502132     8511360   5% /
  296: 
  297: On i386:
  298: 
  299:     # disklabel -r wd0
  300:     type: unknown
  301:     disk: Disk00
  302:     label:
  303:     flags:
  304:     bytes/sector: 512
  305:     sectors/track: 63
  306:     tracks/cylinder: 16
  307:     sectors/cylinder: 1008
  308:     cylinders: 19386
  309:     total sectors: 19541088
  310:     rpm: 3600
  311:     interleave: 1
  312:     trackskew: 0
  313:     cylinderskew: 0
  314:     headswitch: 0           # microseconds
  315:     track-to-track seek: 0  # microseconds
  316:     drivedata: 0
  317:     
  318:     16 partitions:
  319:     #        size    offset     fstype [fsize bsize cpg/sgs]
  320:      a:  19276992        63     4.2BSD   1024  8192 46568  # (Cyl.      0* - 19124*)
  321:      b:    264033  19277055       swap                     # (Cyl.  19124* - 19385)
  322:      c:  19541025        63     unused      0     0        # (Cyl.      0* - 19385)
  323:      d:  19541088         0     unused      0     0        # (Cyl.      0 - 19385)
  324:     
  325:     # fdisk /dev/rwd0d
  326:     Disk: /dev/rwd0d
  327:     NetBSD disklabel disk geometry:
  328:     cylinders: 19386, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
  329:     total sectors: 19541088
  330:     
  331:     BIOS disk geometry:
  332:     cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
  333:     total sectors: 19541088
  334:     
  335:     Partition table:
  336:     0: NetBSD (sysid 169)
  337:         start 63, size 19541025 (9542 MB, Cyls 0-1216/96/1), Active
  338:     1: <UNUSED>
  339:     2: <UNUSED>
  340:     3: <UNUSED>
  341:     Bootselector disabled.
  342:     First active partition: 0
  343: 
On sparc64 the command and output differ slightly:
  345: 
  346:     # disklabel -r wd0
  347:     type: unknown
  348:     disk: Disk0
  349:     [...snip...]
  350:     8 partitions:
  351:     #        size    offset     fstype [fsize bsize cpg/sgs]
  352:      a:  19278000         0     4.2BSD   1024  8192 46568  # (Cyl.      0 -  19124)
  353:      b:    263088  19278000       swap                     # (Cyl.  19125 -  19385)
  354:      c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
  355:     
  356:     # sunlabel /dev/rwd0c
  357:     sunlabel> P
  358:     a: start cyl =      0, size = 19278000 (19125/0/0 - 9413.09Mb)
  359:     b: start cyl =  19125, size =   263088 (261/0/0 - 128.461Mb)
  360:     c: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
  361: 
  362: ### Preparing Disk1/wd1
  363: 
  364: Once you have a stock install of NetBSD on Disk0/wd0, you are ready to begin.
  365: Disk1/wd1 will be visible and unused by the system. To setup Disk1/wd1, you will
  366: use
  367: [[!template id=man name="disklabel" section="8"]]
  368: to allocate the entire second disk to the RAID-1 set.
  369: 
  370: *Tip*:
  371: > The best way to ensure that Disk1/wd1 is completely empty is to 'zero'
  372: > out the first few sectors of the disk with
> [[!template id=man name="dd" section="1"]]. This will
  374: > erase the MBR (i386) or Sun disk label (sparc64), as well as the NetBSD disk
  375: > label. If you make a mistake at any point during the RAID setup process, you can
  376: > always refer to this process to restore the disk to an empty state.
  377: > 
  378: > *Note*: On sparc64, use `/dev/rwd1c` instead of `/dev/rwd1d`!
  379: > 
  380: >     # dd if=/dev/zero of=/dev/rwd1d bs=8k count=1
  381: >     1+0 records in
  382: >     1+0 records out
  383: >     8192 bytes transferred in 0.003 secs (2730666 bytes/sec)
  384: > 
  385: > Once this is complete, on i386, verify that both the MBR and NetBSD disk labels
  386: > are gone. On sparc64, verify that the Sun Disk label is gone as well.
  387: > 
  388: > On i386:
  389: > 
  390: >     # fdisk /dev/rwd1d
  391: >     
  392: >     fdisk: primary partition table invalid, no magic in sector 0
  393: >     Disk: /dev/rwd1d
  394: >     NetBSD disklabel disk geometry:
  395: >     cylinders: 19386, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
  396: >     total sectors: 19541088
  397: >     
  398: >     BIOS disk geometry:
  399: >     cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
  400: >     total sectors: 19541088
  401: >     
  402: >     Partition table:
  403: >     0: <UNUSED>
  404: >     1: <UNUSED>
  405: >     2: <UNUSED>
  406: >     3: <UNUSED>
  407: >     Bootselector disabled.
  408: >     
  409: >     # disklabel -r wd1
  410: >     
  411: >     [...snip...]
  412: >     16 partitions:
  413: >     #        size    offset     fstype [fsize bsize cpg/sgs]
  414: >      c:  19541025        63     unused      0     0        # (Cyl.      0* - 19385)
  415: >      d:  19541088         0     unused      0     0        # (Cyl.      0 - 19385)
  416: > 
  417: > On sparc64:
  418: > 
  419: >     # sunlabel /dev/rwd1c
  420: >     
  421: >     sunlabel: bogus label on `/dev/wd1c' (bad magic number)
  422: >     
  423: >     # disklabel -r wd1
  424: >     
  425: >     [...snip...]
  426: >     3 partitions:
  427: >     #        size    offset     fstype [fsize bsize cpg/sgs]
  428: >      c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
  429: >     disklabel: boot block size 0
  430: >     disklabel: super block size 0
  431: 
  432: Now that you are certain the second disk is empty, on i386 you must establish
the MBR on the second disk using the values obtained from Disk0/wd0 above.
Remember to mark the NetBSD partition active, or the system will not boot.
  435: You must also create a NetBSD disklabel on Disk1/wd1 that will enable a RAID
  436: volume to exist upon it. On sparc64, you will need to simply
  437: [[!template id=man name="disklabel" section="8"]]
  438: the second disk which will write the proper Sun Disk Label.
  439: 
*Tip*:
[[!template id=man name="disklabel" section="8"]]
will use your shell's `$EDITOR` environment variable to edit the
disklabel. The default is
[[!template id=man name="vi" section="1"]].
  445: 
  446: On i386:
  447: 
  448:     # fdisk -0ua /dev/rwd1d
  449:     fdisk: primary partition table invalid, no magic in sector 0
  450:     Disk: /dev/rwd1d
  451:     NetBSD disklabel disk geometry:
  452:     cylinders: 19386, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
  453:     total sectors: 19541088
  454:     
  455:     BIOS disk geometry:
  456:     cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
  457:     total sectors: 19541088
  458:     
  459:     Do you want to change our idea of what BIOS thinks? [n]
  460:     
  461:     Partition 0:
  462:     <UNUSED>
  463:     The data for partition 0 is:
  464:     <UNUSED>
  465:     sysid: [0..255 default: 169]
  466:     start: [0..1216cyl default: 63, 0cyl, 0MB]
  467:     size: [0..1216cyl default: 19541025, 1216cyl, 9542MB]
  468:     bootmenu: []
  469:     Do you want to change the active partition? [n] y
  470:     Choosing 4 will make no partition active.
  471:     active partition: [0..4 default: 0] 0
  472:     Are you happy with this choice? [n] y
  473:     
  474:     We haven't written the MBR back to disk yet.  This is your last chance.
  475:     Partition table:
  476:     0: NetBSD (sysid 169)
  477:         start 63, size 19541025 (9542 MB, Cyls 0-1216/96/1), Active
  478:     1: <UNUSED>
  479:     2: <UNUSED>
  480:     3: <UNUSED>
  481:     Bootselector disabled.
  482:     Should we write new partition table? [n] y
  483:     
  484:     # disklabel -r -e -I wd1
  485:     type: unknown
  486:     disk: Disk1
  487:     label:
  488:     flags:
  489:     bytes/sector: 512
  490:     sectors/track: 63
  491:     tracks/cylinder: 16
  492:     sectors/cylinder: 1008
  493:     cylinders: 19386
  494:     total sectors: 19541088
  495:     [...snip...]
  496:     16 partitions:
  497:     #        size    offset     fstype [fsize bsize cpg/sgs]
  498:      a:  19541025        63       RAID                     # (Cyl.      0*-19385)
  499:      c:  19541025        63     unused      0     0        # (Cyl.      0*-19385)
  500:      d:  19541088         0     unused      0     0        # (Cyl.      0 -19385)
  501: 
  502: On sparc64:
  503: 
  504:     # disklabel -r -e -I wd1
  505:     type: unknown
  506:     disk: Disk1
  507:     label:
  508:     flags:
  509:     bytes/sector: 512
  510:     sectors/track: 63
  511:     tracks/cylinder: 16
  512:     sectors/cylinder: 1008
  513:     cylinders: 19386
  514:     total sectors: 19541088
  515:     [...snip...]
  516:     3 partitions:
  517:     #        size    offset     fstype [fsize bsize cpg/sgs]
  518:      a:  19541088         0       RAID                     # (Cyl.      0 -  19385)
  519:      c:  19541088         0     unused      0     0        # (Cyl.      0 -  19385)
  520:     
  521:     # sunlabel /dev/rwd1c
  522:     sunlabel> P
  523:     a: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
  524:     c: start cyl =      0, size = 19541088 (19386/0/0 - 9541.55Mb)
  525: 
*Note*: On i386, the `c:` and `d:` slices are reserved. `c:` represents the
NetBSD portion of the disk and `d:` represents the entire disk. Because we want
to allocate the entire NetBSD MBR partition to RAID, and because `a:` resides
within the bounds of `c:`, the `a:` and `c:` slices have the same size and
offset values. The offset must start at a track boundary (an increment of
sectors matching the sectors/track value in the disk label). On sparc64,
however, `c:` represents the entire NetBSD partition in the Sun disk label and
`d:` is not reserved. Also note that sparc64's `c:` and `a:` require no offset
from the beginning of the disk; if an offset is needed, it must start at a
cylinder boundary (an increment of sectors matching the sectors/cylinder
value).
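The boundary rules in this note can be checked with a little arithmetic; a sketch using the i386 label values from this example:

```shell
# Values from the i386 disklabel in this example.
sectors_per_track=63
a_offset=63                 # a: starts at the first track boundary
a_size=19541025
d_size=19541088             # d: spans the entire disk

# The offset must be a multiple of sectors/track...
[ $((a_offset % sectors_per_track)) -eq 0 ] && echo "a: starts on a track boundary"

# ...and offset + size of a: should cover the rest of the disk, matching d:.
[ $((a_offset + a_size)) -eq "$d_size" ] && echo "a: fills the NetBSD partition"
```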
  537: 
  538: ### Initializing the RAID Device
  539: 
  540: Next we create the configuration file for the RAID set/volume. Traditionally,
  541: RAIDframe configuration files belong in `/etc` and would be read and initialized
  542: at boot time, however, because we are creating a bootable RAID volume, the
  543: configuration data will actually be written into the RAID volume using the
  544: *auto-configure* feature. Therefore, files are needed only during the initial
  545: setup and should not reside in `/etc`.
  546: 
  547:     # vi /var/tmp/raid0.conf
  548:     START array
  549:     1 2 0
  550:     
  551:     START disks
  552:     absent
  553:     /dev/wd1a
  554:     
  555:     START layout
  556:     128 1 1 1
  557:     
  558:     START queue
  559:     fifo 100
  560: 
Note that `absent` means a non-existent disk. This will allow us to establish
  562: the RAID volume with a bogus component that we will substitute for Disk0/wd0 at
  563: a later time.
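For reference, the fields in the configuration file break down as follows. This is an annotated sketch of the same file; the comment convention and field names follow the sample configurations in [[!template id=man name="raidctl" section="8"]]:

```
START array
# numRow numCol numSpare
1 2 0

START disks
absent
/dev/wd1a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
```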
  564: 
  565: Next we configure the RAID device and initialize the serial number to something
  566: unique. In this example we use a "YYYYMMDD*`Revision`*" scheme. The format you
  567: choose is entirely at your discretion, however the scheme you choose should
  568: ensure that no two RAID sets use the same serial number at the same time.
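As a sketch, a serial number in the scheme above could be generated with date(1); the trailing "01" revision component is an assumption for illustration:

```shell
# Build a YYYYMMDDRevision serial number; "01" stands for the first
# revision cut on that day (hypothetical convention).
revision=01
serial="$(date +%Y%m%d)${revision}"
echo "$serial"
```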
  569: 
  570: After that we initialize the RAID set for the first time, safely ignoring the
  571: errors regarding the bogus component.
  572: 
  573:     # raidctl -v -C /var/tmp/raid0.conf raid0
  574:     Ignoring missing component at column 0
  575:     raid0: Component absent being configured at col: 0
  576:              Column: 0 Num Columns: 0
  577:              Version: 0 Serial Number: 0 Mod Counter: 0
  578:              Clean: No Status: 0
  579:     Number of columns do not match for: absent
  580:     absent is not clean!
  581:     raid0: Component /dev/wd1a being configured at col: 1
  582:              Column: 0 Num Columns: 0
  583:              Version: 0 Serial Number: 0 Mod Counter: 0
  584:              Clean: No Status: 0
  585:     Column out of alignment for: /dev/wd1a
  586:     Number of columns do not match for: /dev/wd1a
  587:     /dev/wd1a is not clean!
  588:     raid0: There were fatal errors
  589:     raid0: Fatal errors being ignored.
  590:     raid0: RAID Level 1
  591:     raid0: Components: component0[**FAILED**] /dev/wd1a
  592:     raid0: Total Sectors: 19540864 (9541 MB)
  593:     # raidctl -v -I 2009122601 raid0
  594:     # raidctl -v -i raid0
  595:     Initiating re-write of parity
  596:     raid0: Error re-writing parity!
  597:     Parity Re-write status:
  598:     
  599:     # tail -1 /var/log/messages
  600:     Dec 26 00:00:30  /netbsd: raid0: Error re-writing parity!
  601:     # raidctl -v -s raid0
  602:     Components:
  603:               component0: failed
  604:                /dev/wd1a: optimal
  605:     No spares.
  606:     component0 status is: failed.  Skipping label.
  607:     Component label for /dev/wd1a:
  608:        Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
  609:        Version: 2, Serial Number: 2009122601, Mod Counter: 7
  610:        Clean: No, Status: 0
  611:        sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
  612:        Queue size: 100, blocksize: 512, numBlocks: 19540864
  613:        RAID Level: 1
  614:        Autoconfig: No
  615:        Root partition: No
  616:        Last configured as: raid0
  617:     Parity status: DIRTY
  618:     Reconstruction is 100% complete.
  619:     Parity Re-write is 100% complete.
  620:     Copyback is 100% complete.
  621: 
  622: ### Setting up Filesystems
  623: 
  624: *Caution*: The root filesystem must begin at sector 0 of the RAID device. If
  625: not, the primary boot loader will be unable to find the secondary boot loader.
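One way to check this constraint is to look at the offset column for `a:` in the disklabel output; a sketch that parses a sample label line with awk (on a live system you would pipe `disklabel -r raid0` in instead):

```shell
# Sample "a:" line in disklabel(8) format (size, offset, fstype, ...).
label_line=' a:  19015680         0     4.2BSD      0     0     0'

# Field 3 is the partition offset; the root fs must start at sector 0.
offset=$(printf '%s\n' "$label_line" | awk '$1 == "a:" { print $3 }')
[ "$offset" -eq 0 ] && echo "root file system begins at sector 0"
```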
  626: 
The RAID device is now configured and available. It is a pseudo disk device
and is created with a default disk label. You must now
  629: determine the proper sizes for disklabel slices for your production environment.
  630: For purposes of simplification in this example, our system will have 8.5
  631: gigabytes dedicated to `/` as `/dev/raid0a` and the rest allocated to `swap`
  632: as `/dev/raid0b`.
  633: 
*Caution*: This is an unrealistic disk layout for a production server; the
NetBSD Guide can expand on proper partitioning technique. See [Installing
NetBSD: Preliminary considerations and preparations](/guide/inst).
  637: 
*Note*: 1 GB is 2\*1024\*1024=2097152 blocks (1 block is 512 bytes, or
0.5 kilobytes). Regardless of the underlying hardware composing a RAID set,
the RAID pseudo disk will always have 512 bytes/sector.
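The arithmetic in this note can be reproduced directly in the shell:

```shell
# One gigabyte (2^30 bytes) expressed in 512-byte blocks:
echo $(( 1024 * 1024 * 1024 / 512 ))   # 2097152, matching the note above
```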
  641: 
  642: *Note*: In our example, the space allocated to the underlying `a:` slice
  643: composing the RAID set differed between i386 and sparc64, therefore the total
  644: sectors of the RAID volumes differs:
  645: 
  646: On i386:
  647: 
    # disklabel -r -e -I raid0
  649:     type: RAID
  650:     disk: raid
  651:     label: fictitious
  652:     flags:
  653:     bytes/sector: 512
  654:     sectors/track: 128
  655:     tracks/cylinder: 8
  656:     sectors/cylinder: 1024
  657:     cylinders: 19082
  658:     total sectors: 19540864
  659:     rpm: 3600
  660:     interleave: 1
  661:     trackskew: 0
  662:     cylinderskew: 0
  663:     headswitch: 0 # microseconds
  664:     track-to-track seek: 0 # microseconds
  665:     drivedata: 0
  666:     
  667:     #        size    offset     fstype [fsize bsize cpg/sgs]
  668:      a:  19015680         0     4.2BSD      0     0     0  # (Cyl.      0 - 18569)
  669:      b:    525184  19015680       swap                     # (Cyl.  18570 - 19082*)
  670:      d:  19540864         0     unused      0     0        # (Cyl.      0 - 19082*)
  671: 
On sparc64:

    # disklabel -r -e -I raid0
    [...snip...]
    total sectors: 19539968
    [...snip...]
    3 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
     a:  19251200         0     4.2BSD      0     0     0  # (Cyl.      0 -  18799)
     b:    288768  19251200       swap                     # (Cyl.  18800 -  19081)
     c:  19539968         0     unused      0     0        # (Cyl.      0 -  19081)

Next, format the newly created `/` partition as a 4.2BSD FFSv1 file system:

    # newfs -O 1 /dev/rraid0a
    /dev/rraid0a: 9285.0MB (19015680 sectors) block size 16384, fragment size 2048
            using 51 cylinder groups of 182.06MB, 11652 blks, 23040 inodes.
    super-block backups (for fsck -b #) at:
    32, 372896, 745760, 1118624, 1491488, 1864352, 2237216, 2610080, 2982944,
    ...............................................................................
    
    # fsck -fy /dev/rraid0a
    ** /dev/rraid0a
    ** File system is already clean
    ** Last Mounted on
    ** Phase 1 - Check Blocks and Sizes
    ** Phase 2 - Check Pathnames
    ** Phase 3 - Check Connectivity
    ** Phase 4 - Check Reference Counts
    ** Phase 5 - Check Cyl groups
    1 files, 1 used, 4679654 free (14 frags, 584955 blocks, 0.0% fragmentation)

### Migrating System to RAID

The new RAID filesystems are now ready for use. We mount them under `/mnt` and
copy all files from the old system. This can be done using
[[!template id=man name="dump" section="8"]] or
[[!template id=man name="pax" section="1"]].

    # mount /dev/raid0a /mnt
    # df -h /mnt
    Filesystem        Size       Used      Avail %Cap Mounted on
    /dev/raid0a       8.9G       2.0K       8.5G   0% /mnt
    # cd /; pax -v -X -rw -pe . /mnt
    [...snip...]

The NetBSD install now exists on the RAID filesystem. We need to fix the
mount-points in the new copy of `/etc/fstab` or the system will not come up
correctly. Replace instances of `wd0` with `raid0`:

    # mv /mnt/etc/fstab /mnt/etc/fstab.old
    # sed 's/wd0/raid0/g' /mnt/etc/fstab.old > /mnt/etc/fstab

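If you want to confirm what the substitution does before trusting the
rewritten file, the same sed expression can be run over a sample line (the
fstab line here is only an illustration):

```shell
# Every occurrence of wd0 becomes raid0; all other fields are untouched
echo '/dev/wd0a / ffs rw 1 1' | sed 's/wd0/raid0/g'
# -> /dev/raid0a / ffs rw 1 1
```

Afterwards, `grep wd0 /mnt/etc/fstab` should print nothing.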
The swap should be unconfigured upon shutdown to avoid parity errors on the RAID
device. This can be done with a simple, one-line setting in `/etc/rc.conf`.

    # vi /mnt/etc/rc.conf
    swapoff=YES

Next, the boot loader must be installed on Disk1/wd1. Without it, the system
will be unable to boot from Disk1/wd1 if Disk0/wd0 fails, and you should not
count on the system staying up until wd0 is replaced.

*Tip*: Because the BIOS/CMOS menus in many i386 based systems are misleading
with regard to device boot order, it is highly recommended to utilize the `-o
timeout=X` option supported by the i386 1st stage boot loader. Set a unique
value for each disk as a point of reference, so that you can easily determine
from which disk the system is booting.

*Caution*: Although it may seem logical to install the 1st stage boot block into
`/dev/rwd1{c,d}` (which was historically correct with the NetBSD 1.6.x
[[!template id=man name="installboot" section="8"]]), this is no longer the
case. If you make this mistake, the boot sector will become irrecoverably
damaged and you will need to start the process over again.

On i386, install the boot loader into `/dev/rwd1a`:

    # /usr/sbin/installboot -o timeout=30 -v /dev/rwd1a /usr/mdec/bootxx_ffsv1
    File system:         /dev/rwd1a
    Primary bootstrap:   /usr/mdec/bootxx_ffsv1
    Ignoring PBR with invalid magic in sector 0 of `/dev/rwd1a'
    Boot options:        timeout 30, flags 0, speed 9600, ioaddr 0, console pc

On sparc64, install the boot loader into `/dev/rwd1a` as well; however, the `-o`
flag is unsupported (and unneeded thanks to OpenBoot):

    # /usr/sbin/installboot -v /dev/rwd1a /usr/mdec/bootblk
    File system:         /dev/rwd1a
    Primary bootstrap:   /usr/mdec/bootblk
    Bootstrap start sector: 1
    Bootstrap byte count:   5140
    Writing bootstrap

Finally the RAID set must be made auto-configurable and the system should be
rebooted. After the reboot everything is mounted from the RAID devices.

    # raidctl -v -A root raid0
    raid0: Autoconfigure: Yes
    raid0: Root: Yes
    # tail -2 /var/log/messages
    raid0: New autoconfig value is: 1
    raid0: New rootpartition value is: 1
    # raidctl -v -s raid0
    [...snip...]
       Autoconfig: Yes
       Root partition: Yes
       Last configured as: raid0
    [...snip...]
    # shutdown -r now

*Warning*: Always use
[[!template id=man name="shutdown" section="8"]]
when shutting down. Never simply use
[[!template id=man name="reboot" section="8"]], which will not properly run
the shutdown RC scripts and will not safely disable swap. This will cause
dirty parity at every reboot.

### The first boot with RAID

At this point, temporarily configure your system to boot Disk1/wd1. See the
notes in [[Testing Boot Blocks|guide/rf#adding-text-boot]] for details on this
process. The system should now boot, and all filesystems should be on the RAID
devices. The RAID set will function with a single component, but it is
degraded because the bogus drive (wd9) has failed.

    # egrep -i "raid|root" /var/run/dmesg.boot
    raid0: RAID Level 1
    raid0: Components: component0[**FAILED**] /dev/wd1a
    raid0: Total Sectors: 19540864 (9541 MB)
    boot device: raid0
    root on raid0a dumps on raid0b
    root file system type: ffs
    
    # df -h
    Filesystem    Size     Used     Avail Capacity  Mounted on
    /dev/raid0a   8.9G     196M      8.3G     2%    /
    kernfs        1.0K     1.0K        0B   100%    /kern
    
    # swapctl -l
    Device      1K-blocks     Used    Avail Capacity  Priority
    /dev/raid0b    262592        0   262592     0%    0
    # raidctl -s raid0
    Components:
              component0: failed
               /dev/wd1a: optimal
    No spares.
    component0 status is: failed.  Skipping label.
    Component label for /dev/wd1a:
       Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
       Version: 2, Serial Number: 2009122601, Mod Counter: 65
       Clean: No, Status: 0
       sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
       Queue size: 100, blocksize: 512, numBlocks: 19540864
       RAID Level: 1
       Autoconfig: Yes
       Root partition: Yes
       Last configured as: raid0
    Parity status: DIRTY
    Reconstruction is 100% complete.
    Parity Re-write is 100% complete.
    Copyback is 100% complete.

### Adding Disk0/wd0 to RAID

We will now add Disk0/wd0 as a component of the RAID. This will destroy the
original file system structure. On i386, the MBR disklabel will be unaffected
(remember, we copied wd0's label to wd1 anyway), therefore there is no need to
"zero" Disk0/wd0. However, we need to relabel Disk0/wd0 with a NetBSD disklabel
layout identical to Disk1/wd1's. Then we add Disk0/wd0 as a "hot-spare" to the
RAID set and initiate the parity reconstruction for all RAID devices,
effectively bringing Disk0/wd0 into the RAID-1 set and "syncing up" both disks.

    # disklabel -r wd1 > /tmp/disklabel.wd1
    # disklabel -R -r wd0 /tmp/disklabel.wd1

As a last-minute sanity check, you might want to use
[[!template id=man name="diff" section="1"]] to
ensure that the disklabels of Disk0/wd0 match Disk1/wd1. You should also
back up these files for reference in the event of an emergency.

    # disklabel -r wd0 > /tmp/disklabel.wd0
    # disklabel -r wd1 > /tmp/disklabel.wd1
    # diff /tmp/disklabel.wd0 /tmp/disklabel.wd1
    # fdisk /dev/rwd0 > /tmp/fdisk.wd0
    # fdisk /dev/rwd1 > /tmp/fdisk.wd1
    # diff /tmp/fdisk.wd0 /tmp/fdisk.wd1
    # mkdir /root/RFbackup
    # cp -p /tmp/{disklabel,fdisk}* /root/RFbackup

Once you are sure, add Disk0/wd0 as a spare component, and start reconstruction:

    # raidctl -v -a /dev/wd0a raid0
    /netbsd: Warning: truncating spare disk /dev/wd0a to 241254528 blocks
    # raidctl -v -s raid0
    Components:
              component0: failed
               /dev/wd1a: optimal
    Spares:
               /dev/wd0a: spare
    [...snip...]
    # raidctl -F component0 raid0
    RECON: initiating reconstruction on col 0 -> spare at col 2
     11% |****                                   | ETA:    04:26 \

Depending on the speed of your hardware, the reconstruction time will vary. You
may wish to watch it on another terminal (note that you can interrupt
`raidctl -S` at any time without stopping the synchronisation):

    # raidctl -S raid0
    Reconstruction is 0% complete.
    Parity Re-write is 100% complete.
    Copyback is 100% complete.
    Reconstruction status:
      17% |******                                 | ETA: 03:08 -

After reconstruction, both disks should be *optimal*.

    # tail -f /var/log/messages
    raid0: Reconstruction of disk at col 0 completed
    raid0: Recon time was 1290.625033 seconds, accumulated XOR time was 0 us (0.000000)
    raid0:  (start time 1093407069 sec 145393 usec, end time 1093408359 sec 770426 usec)
    raid0: Total head-sep stall count was 0
    raid0: 305318 recon event waits, 1 recon delays
    raid0: 1093407069060000 max exec ticks
    
    # raidctl -v -s raid0
    Components:
              component0: spared
               /dev/wd1a: optimal
    Spares:
               /dev/wd0a: used_spare
    [...snip...]

When the reconstruction is finished, we need to install the boot loader on
Disk0/wd0. On i386, install the boot loader into `/dev/rwd0a`:

    # /usr/sbin/installboot -o timeout=15 -v /dev/rwd0a /usr/mdec/bootxx_ffsv1
    File system:         /dev/rwd0a
    Primary bootstrap:   /usr/mdec/bootxx_ffsv1
    Boot options:        timeout 15, flags 0, speed 9600, ioaddr 0, console pc

On sparc64:

    # /usr/sbin/installboot -v /dev/rwd0a /usr/mdec/bootblk
    File system:         /dev/rwd0a
    Primary bootstrap:   /usr/mdec/bootblk
    Bootstrap start sector: 1
    Bootstrap byte count:   5140
    Writing bootstrap

Finally, reboot the machine one last time before proceeding. This is required
to migrate Disk0/wd0 from the "used\_spare" state to "optimal" as "Component0".
Refer to the notes in the next section regarding verification of clean parity
after each reboot.

    # shutdown -r now

### Testing Boot Blocks

At this point, you need to ensure that your system's hardware can properly boot
using the boot blocks on either disk. On i386, this is a hardware-dependent
process that may be done via your motherboard CMOS/BIOS menu or your controller
card's configuration menu.

On i386, use the menu system on your machine to set the boot device order /
priority to Disk1/wd1 before Disk0/wd0. The examples here depict a generic
Award BIOS.

![Award BIOS i386 Boot Disk1/wd1](/guide/images/raidframe_awardbios2.png)  
**Award BIOS i386 Boot Disk1/wd1**

Save changes and exit:

    >> NetBSD/i386 BIOS Boot, Revision 5.2 (from NetBSD 5.0.2)
    >> (builds@b7, Sun Feb 7 00:30:50 UTC 2010)
    >> Memory: 639/130048 k
    Press return to boot now, any other key for boot menu
    booting hd0a:netbsd - starting in 30

You can determine that the BIOS is reading Disk1/wd1 because the timeout of
the boot loader is 30 seconds instead of 15. After the reboot, re-enter the
BIOS and configure the drive boot order back to the default:

![Award BIOS i386 Boot Disk0/wd0](/guide/images/raidframe_awardbios1.png)  
**Award BIOS i386 Boot Disk0/wd0**

Save changes and exit:

    >> NetBSD/i386 BIOS Boot, Revision 5.2 (from NetBSD 5.0.2)
    >> Memory: 639/130048 k
    Press return to boot now, any other key for boot menu
    booting hd0a:netbsd - starting in 15

Notice how your custom kernel detects controller/bus/drive assignments
independent of what the BIOS assigns as the boot disk. This is the expected
behavior.

On sparc64, use the Sun OpenBoot **devalias** to confirm that both disks are bootable:

    Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 400MHz), No Keyboard
    OpenBoot 3.15, 128 MB memory installed, Serial #nnnnnnnn.
    Ethernet address 8:0:20:a5:d1:3b, Host ID: nnnnnnnn.
    
    ok devalias
    [...snip...]
    cdrom /pci@1f,0/pci@1,1/ide@3/cdrom@2,0:f
    disk /pci@1f,0/pci@1,1/ide@3/disk@0,0
    disk3 /pci@1f,0/pci@1,1/ide@3/disk@3,0
    disk2 /pci@1f,0/pci@1,1/ide@3/disk@2,0
    disk1 /pci@1f,0/pci@1,1/ide@3/disk@1,0
    disk0 /pci@1f,0/pci@1,1/ide@3/disk@0,0
    [...snip...]
    
    ok boot disk0 netbsd
    Initializing Memory [...]
    Boot device /pci/pci/ide@3/disk@0,0 File and args: netbsd
    NetBSD IEEE 1275 Bootblock
    >> NetBSD/sparc64 OpenFirmware Boot, Revision 1.13
    >> (builds@b7.netbsd.org, Wed Jul 29 23:43:42 UTC 2009)
    loadfile: reading header
    elf64_exec: Booting [...]
    symbols @ [....]
     Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
         2006, 2007, 2008, 2009
         The NetBSD Foundation, Inc.  All rights reserved.
     Copyright (c) 1982, 1986, 1989, 1991, 1993
         The Regents of the University of California.  All rights reserved.
    [...snip...]

And the second disk:

    ok boot disk2 netbsd
    Initializing Memory [...]
    Boot device /pci/pci/ide@3/disk@2,0: File and args:netbsd
    NetBSD IEEE 1275 Bootblock
    >> NetBSD/sparc64 OpenFirmware Boot, Revision 1.13
    >> (builds@b7.netbsd.org, Wed Jul 29 23:43:42 UTC 2009)
    loadfile: reading header
    elf64_exec: Booting [...]
    symbols @ [....]
     Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
         2006, 2007, 2008, 2009
         The NetBSD Foundation, Inc.  All rights reserved.
     Copyright (c) 1982, 1986, 1989, 1991, 1993
         The Regents of the University of California.  All rights reserved.
    [...snip...]

At each boot, the following should appear in the NetBSD kernel
[[!template id=man name="dmesg" section="8"]]:

    Kernelized RAIDframe activated
    raid0: RAID Level 1
    raid0: Components: /dev/wd0a /dev/wd1a
    raid0: Total Sectors: 19540864 (9541 MB)
    boot device: raid0
    root on raid0a dumps on raid0b
    root file system type: ffs

Once you are certain that both disks are bootable, verify the RAID parity is
clean after each reboot:

    # raidctl -v -s raid0
    Components:
              /dev/wd0a: optimal
              /dev/wd1a: optimal
    No spares.
    [...snip...]
    Component label for /dev/wd0a:
       Row: 0, Column: 0, Num Rows: 1, Num Columns: 2
       Version: 2, Serial Number: 2009122601, Mod Counter: 67
       Clean: No, Status: 0
       sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
       Queue size: 100, blocksize: 512, numBlocks: 19540864
       RAID Level: 1
       Autoconfig: Yes
       Root partition: Yes
       Last configured as: raid0
    Component label for /dev/wd1a:
       Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
       Version: 2, Serial Number: 2009122601, Mod Counter: 67
       Clean: No, Status: 0
       sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
       Queue size: 100, blocksize: 512, numBlocks: 19540864
       RAID Level: 1
       Autoconfig: Yes
       Root partition: Yes
       Last configured as: raid0
    Parity status: clean
    Reconstruction is 100% complete.
    Parity Re-write is 100% complete.
    Copyback is 100% complete.

