    1: **Contents**
    2: 
    3: [[!toc levels=3]]
    4: 
    5: # NetBSD Logical Volume Manager (LVM) configuration
    6: 
NetBSD LVM allows logical volume management on NetBSD systems, using a
well-known user interface that is the same as that of the Linux LVM2 tools.
    9: 
   10: NetBSD LVM is built on Linux lvm2tools and libdevmapper, together with a
   11: BSD-licensed device-mapper kernel driver specially written for NetBSD.
   12: 
The LVM driver allows the user to manage available disk space effectively and
efficiently. Disk space from several disks and partitions, known as *Physical
Volumes*, can be added to *Volume Groups*, which form the pool of available
disk space for *Logical Partitions*, also known as Logical Volumes.
   17: 
   18: Logical Volumes can be grown and shrunk at will using the LVM utilities.
   19: 
   20: The basic building block is the Physical Volume. This is a disk, or a part of a
   21: disk, which is used to store data.
   22: 
   23: Physical Volumes are aggregated together to make Volume Groups, or VGs.
   24: Typically, Volume Groups are used to aggregate storage for the same functional
   25: unit. Typical Volume Groups could thus be named `Audio`, `Multimedia` or
   26: `Documents`. By segregating storage requirements in this functional way, the
   27: same type of resilience and redundancy is applied to the whole of the functional
   28: unit.
   29: 
The steps required to set up an LVM are as follows:
   31: 
 1. Install physical media
 2. Configure kernel support
 3. Configure the system and install the tools
 4. *Optional step*
    Disklabel each volume member of the LVM
 5. Initialize the LVM disk devices
 6. Create a volume group from the initialized disks
 7. Create a logical volume from the created volume group
 8. Create a filesystem on the new LV device
 9. Mount the LV filesystem
   42: 
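As a quick overview, the following condensed sketch shows steps 5 through 9
on a single disk; the device and volume names are illustrative, and each
command is explained in the sections below:

    # lvm pvcreate /dev/rsd0d
    # lvm vgcreate vg0 /dev/rsd0d
    # lvm lvcreate -L 1G -n lv0 vg0
    # newfs /dev/vg0/rlv0
    # mount /dev/vg0/lv0 /mnt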
   43: 
This example features an LVM setup on NetBSD/i386.
   45: 
   46: ## Anatomy of NetBSD Logical Volume Manager
   47: 
   48: ![Anatomy of Logical Volume Management](/guide/images/lvm.png)
   49: 
   50:  1. **Volume Group**
   51: 	The Volume Group is a disk space pool from which the user creates Logical
   52: 	Volumes and to which Physical Volumes can be added. It is the basic
   53: 	administration unit of the NetBSD LVM implementation.
   54: 
 2. **Physical Volume**
	A physical volume is the basic unit in an LVM structure. Every PV consists
	of small disk space chunks called Physical Extents. Every Volume Group
	must have at least one PV. A PV can be created on hard disks or hard
	disk-like devices such as raid, ccd, or cgd devices.
   60: 
 3. **Logical Volume**
	The Logical Volume is a logical partition created from disk space assigned
	to the Volume Group. An LV can be newfsed and mounted like any other
	pseudo-disk device. The LVM tools use functionality exported by the
	device-mapper driver in the kernel to create LVs.
   66: 
 4. **Physical Extents**
	Each physical volume is divided into chunks of disk space called Physical
	Extents. The default size is 4MB. Every LV size is rounded to a multiple
	of the PE size. An LV is created by mapping Logical Extents in the LV to
	Physical Extents in a Volume Group. (An example of choosing a different
	extent size is shown after this list.)
   71: 
   72:  5. **Logical Extents**
   73: 	Each logical volume is split into chunks of disk space, known as logical
   74: 	extents. The extent size is the same for all logical volumes in the volume
   75: 	group.
   76: 
 6. **Physical Extents mapping**
	Every LV consists of *LEs* mapped to *PEs* by a target mapping.
	Currently, the following mappings are defined:
   80: 
    * **Linear Mapping**
      will linearly assign a range of PEs to LEs. For example, it can map
      100 PEs from PV 1 to LV 1 and another 100 PEs from PV 0.
   85: 
   86:     * **Stripe Mapping**
   87: 	  will interleave the chunks of the logical extents across a number of
   88: 	  physical volumes.
   89: 
   90:  7. **Snapshots**
   91: 
	A facility provided by LVM is 'snapshots'. While standard NetBSD already
	provides file system level snapshots through the
	[[!template id=man name="fss" section="4"]] driver,
	the snapshot facility in LVM allows the administrator to create a logical
	block device which presents an exact copy of a logical volume, frozen at
	some point in time. This facility requires that the snapshot be made at
	a time when the data on the logical volume is in a consistent state.

	*Warning*: The snapshot feature is not fully implemented in LVM on NetBSD
	and should not be used in production.
  102: 
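For illustration of the extent size: the `vgcreate` command of the LVM2 tools
accepts a `-s` option to choose the physical extent size, so (assuming the
NetBSD port behaves like the Linux tools in this respect) a volume group with
16MB extents could be created along these lines:

    # lvm vgcreate -s 16M vg0 /dev/rsd0d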
  103: 
  104: ## Install physical media
  105: 
  106: This step is at your own discretion, depending on your platform and the hardware
  107: at your disposal. LVM can be used with disklabel partitions or even with
  108: standard partitions created with fdisk.
  109: 
  110: From my `dmesg`:
  111: 
  112:     Disk #1:
  113:     probe(esp0:0:0): max sync rate 10.00MB/s
  114:     sd0 at scsibus0 target 0 lun 0: <SEAGATE, ST32430N SUN2.1G, 0444> SCSI2 0/direct fixed
  115:     sd0: 2049 MB, 3992 cyl, 9 head, 116 sec, 512 bytes/sect x 4197405 sectors
  116:     
  117:     Disk #2
  118:     probe(esp0:1:0): max sync rate 10.00MB/s
  119:     sd1 at scsibus0 target 1 lun 0: <SEAGATE, ST32430N SUN2.1G, 0444> SCSI2 0/direct fixed
  120:     sd1: 2049 MB, 3992 cyl, 9 head, 116 sec, 512 bytes/sect x 4197405 sectors
  121:     
  122:     Disk #3
  123:     probe(esp0:2:0): max sync rate 10.00MB/s
  124:     sd2 at scsibus0 target 2 lun 0: <SEAGATE, ST11200N SUN1.05, 9500> SCSI2 0/direct fixed
  125:     sd2: 1005 MB, 1872 cyl, 15 head, 73 sec, 512 bytes/sect x 2059140 sectors
  126:     
  127:     Disk #4
  128:     probe(esp0:3:0): max sync rate 10.00MB/s
  129:     sd3 at scsibus0 target 3 lun 0: <SEAGATE, ST11200N SUN1.05, 8808 > SCSI2 0
  130:     sd3: 1005 MB, 1872 cyl, 15 head, 73 sec, 512 bytes/sect x 2059140 sectors
  131: 
  132: ## Configure Kernel Support
  133: 
The following kernel configuration directive is needed to provide LVM device
support. The dm driver is also provided as a kernel module, so no extra
modifications need be made to a standard NetBSD kernel; the module first
appeared in the NetBSD 6.0 release.

If your system doesn't use modules, you can enable the dm driver by adding
this line to the kernel configuration file. This adds the device-mapper
driver to the kernel and links it in as a statically linked module.
  142: 
  143:     pseudo-device dm
  144: 
If you do not want to rebuild your kernel just for LVM support, you can load
the dm kernel module instead. To get the current status of modules in the
kernel, use the
[[!template id=man name="modstat" section="8"]]
tool:
  150: 
  151:     vm1# modstat
  152:     NAME            CLASS   SOURCE  REFS    SIZE    REQUIRES
  153:     cd9660          vfs     filesys 0       21442   -
  154:     coredump        misc    filesys 1       2814    -
  155:     exec_elf32      misc    filesys 0       6713    coredump
  156:     exec_script     misc    filesys 0       1091    -
  157:     ffs             vfs     boot    0       163040  -
  158:     kernfs          vfs     filesys 0       10201   -
  159:     ptyfs           vfs     filesys 0       7852    -
  160: 
You can use
[[!template id=man name="modload" section="8"]] to
load the dm kernel module by issuing `modload dm`:
  164: 
  165:     vm1# modstat
  166:     NAME            CLASS   SOURCE  REFS    SIZE    REQUIRES
  167:     cd9660          vfs     filesys 0       21442   -
  168:     coredump        misc    filesys 1       2814    -
  169:     dm              misc    filesys 0       14448   -
  170:     exec_elf32      misc    filesys 0       6713    coredump
  171:     exec_script     misc    filesys 0       1091    -
  172:     ffs             vfs     boot    0       163040  -
  173:     kernfs          vfs     filesys 0       10201   -
  174:     ptyfs           vfs     filesys 0       7852    -
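
If the device-mapper driver is no longer needed and no dm devices are
configured, the module can usually be unloaded again with
[[!template id=man name="modunload" section="8"]]:

    vm1# modunload dm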
  175: 
  176: ## Configure LVM on a NetBSD system
  177: 
To use LVM you have to install the lvm2 tools and libdevmapper on your NetBSD
system. These tools and libraries are not built by default.
  180: 
  181: To enable the build of LVM tools, set `MKLVM=yes` in the `/etc/mk.conf` or
  182: `MAKECONF` file.
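
For example, a minimal `/etc/mk.conf` entry looks like this; the LVM tools
and libraries are then built during the next build of the system sources:

    # cat /etc/mk.conf
    [...snip...]
    MKLVM=yes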
  183: 
  184: ## Disklabel each physical volume member of the LVM
  185: 
Each physical volume disk used by the LVM will need a disklabel. In this
example, I will need to disklabel:
  188: 
  189:     /dev/rsd0d
  190:     /dev/rsd1d
  191:     /dev/rsd2d
  192:     /dev/rsd3d
  193: 
  194: It should be borne in mind that it is possible to use the NetBSD vnd driver to
  195: make standard file system space appear in the system as a disk device.
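
For example, a file-backed pseudo-disk could be set up roughly as follows
(the file name and size are arbitrary); the resulting vnd0 device can then be
disklabeled and used as a physical volume like a real disk:

    # dd if=/dev/zero of=/var/tmp/pv0.img bs=1m count=512
    # vnconfig vnd0 /var/tmp/pv0.img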
  196: 
*Note*: Always remember to disklabel the character device, not the block
device, in `/dev/r{s,w}d*`.

*Note*: The `d` partition is used for the whole disk on i386; on all other
platforms the `c` slice represents the entire disk and is reserved.
  202: 
  203: You will probably want to remove any pre-existing disklabels on the physical
  204: volume disks in the LVM. This can be accomplished in one of two ways with the
  205: [[!template id=man name="dd" section="1"]] command:
  206: 
  207:     # dd if=/dev/zero of=/dev/rsd0d bs=8k count=1
  208:     # dd if=/dev/zero of=/dev/rsd1d bs=8k count=1
  209:     # dd if=/dev/zero of=/dev/rsd2d bs=8k count=1
  210:     # dd if=/dev/zero of=/dev/rsd3d bs=8k count=1
  211: 
If your port uses an MBR (Master Boot Record) to partition the disks, so that
the NetBSD partitions are only part of the overall disk and other OSes like
Windows or Linux use other parts, you can wipe the MBR and all partitions on
the disk by using the command:
  216: 
  217:     # dd if=/dev/zero of=/dev/rsd0d bs=8k count=1
  218:     # dd if=/dev/zero of=/dev/rsd1d bs=8k count=1
  219:     # dd if=/dev/zero of=/dev/rsd2d bs=8k count=1
  220:     # dd if=/dev/zero of=/dev/rsd3d bs=8k count=1
  221: 
  222: This will make all data on the entire disk inaccessible. Note that the entire
  223: disk is slice `d` on i386 (and some other ports), and `c` elsewhere (e.g. on
  224: sparc). See the `kern.rawpartition` sysctl - `3` means `d`, `2` means `c`.
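
For example, on an i386 system the raw partition can be checked like this
(output shown for illustration):

    # sysctl kern.rawpartition
    kern.rawpartition = 3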
  225: 
  226: The default disklabel for the disk will look similar to this:
  227: 
  228:     # disklabel -r sd0
  229:     [...snip...]
  230:     bytes/sector: 512
  231:     sectors/track: 63
  232:     tracks/cylinder: 16
  233:     sectors/cylinder: 1008
  234:     cylinders: 207
  235:     total sectors: 208896
  236:     rpm: 3600
  237:     interleave: 1
  238:     trackskew: 0
  239:     cylinderskew: 0
  240:     headswitch: 0           # microseconds
  241:     track-to-track seek: 0  # microseconds
  242:     drivedata: 0
  243:     
  244:     4 partitions:
  245:     #        size    offset     fstype [fsize bsize cpg/sgs]
  246:     a:    208896         0     4.2BSD      0     0     0  # (Cyl.      0 -    207*)
  247:     d:    208896         0     unused      0     0        # (Cyl.      0 -    207*)
  248: 
You will need to create one *slice* on the NetBSD partition of the disk that
consumes the entire partition. The slice must begin at least two sectors after
the end of the disklabel part of the disk; on i386 this is sector 63.
Therefore, the `size` value should be `total sectors` minus 2 × `sectors`.
Edit your disklabel accordingly:
  254: 
  255:     # disklabel -e sd0
  256: 
  257: *Note*: The offset of a slice of type `4.2BSD` must be a multiple of the
  258: `sectors` value.
  259: 
  260: *Note*: Be sure to `export EDITOR=[path to your favorite editor]` before
  261: editing the disklabels.
  262: 
  263: *Note*: The slice must be fstype `4.2BSD`.
  264: 
  265: Because there will only be one slice on this partition, you can recycle the `d`
  266: slice (normally reserved for symbolic uses). Change your disklabel to the
  267: following:
  268: 
  269:     3 partitions:
  270:     #        size   offset    fstype   [fsize bsize   cpg]
  271:      d:  4197403       65      4.2BSD                       # (Cyl. 1 - 4020*)
  272: 
Optionally, you can set up a slice other than `d`; simply adjust accordingly
below:
  275: 
  276:     3 partitions:
  277:     #        size   offset    fstype   [fsize bsize   cpg]
  278:      a:  4197403       65      4.2BSD                       # (Cyl. 1 - 4020*)
  279:      c:  4197405       0       unused     1024  8192        # (Cyl. 0 - 4020*)
  280: 
Be sure to write the label when you have finished. disklabel will object to
your disklabel and prompt you to re-edit it if it does not pass the sanity
checks.
  283: 
  284: ## Create Physical Volumes
  285: 
Once all disks are properly labeled, you will need to create a physical volume
on each of them. Every partition/disk added to the LVM must have a physical
volume header at its start. All information, such as the Volume Group to which
the Physical Volume belongs, is stored in this header.
  290: 
  291:     # lvm pvcreate /dev/rwd1[ad]
  292: 
The status of a physical volume can be viewed with the
[[!template id=man name="pvdisplay" section="8"]]
command:
  296: 
  297:     # lvm pvdisplay
  298: 
  299: ## Create Volume Group
  300: 
Once all disks are properly labeled with a physical volume header, a volume
group must be created from them. A Volume Group is a pool of PEs from which
the administrator can create Logical Volumes (*partitions*).
  304: 
  305:     # lvm vgcreate vg0 /dev/rwd1[ad]
  306: 
 * `vg0` is the name of the Volume Group
 * `/dev/rwd1[ad]` is the Physical Volume
  309: 
The volume group can later be extended or reduced with the
[[!template id=man name="vgextend" section="8"]]
and
[[!template id=man name="vgreduce" section="8"]]
commands. These commands add physical volumes to the VG or remove them from
it.
  315: 
  316:     # lvm vgextend vg0 /dev/rwd1[ad]
  317:     # lvm vgreduce vg0 /dev/rwd1[ad]
  318: 
The status of a Volume Group can be viewed with the
[[!template id=man name="vgdisplay" section="8"]]
command:
  322: 
  323:     # lvm vgdisplay vg0
  324: 
  325: ## Create Logical Volume
  326: 
Once the volume group has been created, the administrator can create *logical
partitions*, i.e. Logical Volumes:
  329: 
  330:     # lvm lvcreate  -L 20M -n lv1 vg0
  331: 
  332:  * `vg0` is the name of the volume group
  333:  * `-L 20M` is the size of the logical volume
  334:  * `-n lv1` is the name of the logical volume
  335: 
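If the volume group contains more than one physical volume, a striped mapping
(see the *Anatomy* section above) can be requested at creation time. Assuming
the NetBSD lvm tools accept the usual LVM2 striping options, a two-way striped
volume might be created like this, where `-i 2` is the number of stripes and
`-I 64` the stripe size in kilobytes:

    # lvm lvcreate -L 40M -n lv2 -i 2 -I 64 vg0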
  336: 
A Logical Volume can later be extended or reduced with the
  338: [[!template id=man name="lvextend" section="8"]]
  339: and
  340: [[!template id=man name="lvreduce" section="8"]]
  341: commands.
  342: 
  343:     # lvm lvextend -L+20M /dev/vg0/lv1
  344:     # lvm lvreduce -L-20M /dev/vg0/lv1
  345: 
*Note*: To shrink an LV you have to shrink the filesystem on it first. See
the manpage of
[[!template id=man name="resize_ffs" section="8"]]
for how to do this.
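
Growing works the other way around: first extend the LV, then grow the file
system on it. A sketch, assuming `lv1` is mounted on `/mnt` and that your
resize_ffs(8) grows the file system to the full partition size when no
explicit size is given:

    # umount /mnt
    # lvm lvextend -L+20M /dev/vg0/lv1
    # resize_ffs /dev/vg0/rlv1
    # mount /dev/vg0/lv1 /mnt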
  350: 
The status of a Logical Volume can be viewed with the
[[!template id=man name="lvdisplay" section="8"]]
command:

    # lvm lvdisplay vg0/lv1
  356: 
After a reboot, all functional LVs in the defined volume groups can be
activated with the command:
  359: 
  360:     # lvm vgchange -a y
  361: 
  362: ## Example: LVM with Volume groups located on raid1
  363: 
The motivation for using a raid 1 disk as the physical volume disk for a
Volume Group is disk reliability. With the PV on a raid 1 disk it is possible
to use Logical Volumes even after a disk failure.
  367: 
  368: ### Loading Device-Mapper driver
  369: 
Before we can start working with the LVM tools, we have to be sure that the
NetBSD dm driver has been compiled into the kernel or loaded as a module. The
easiest way to find out whether the dm driver is available is to run
`modstat`. For more information, see the [[Configure Kernel Support
chapter|guide/lvm#configuring-kernel]].
  375: 
  376: ### Preparing raid1 installation
  377: 
Following the example raid configuration defined in [[Raid 1
configuration|guide/rf#configuring-raid]], the user will set up a clean raid1
disk device with two disks in mirror mode.
  381: 
  382: #### Example RAID1 configuration
  383: 
  384:     # vi /var/tmp/raid0.conf
  385:     START array
  386:     1 2 0
  387:     
  388:     START disks
  389:     /dev/wd2a
  390:     /dev/wd1a
  391:     
  392:     START layout
  393:     128 1 1 1
  394:     
  395:     START queue
  396:     fifo 100
  397: 
  398:     # raidctl -v -C /var/tmp/raid0.conf raid0
  399:     raid0: Component /dev/wd1a being configured at col: 0
  400:     Column: 0 Num Columns: 0
  401:     Version: 0 Serial Number: 0 Mod Counter: 0
  402:     Clean: No Status: 0
  403:     Column out of alignment for: /dev/wd2a
  404:     Number of columns do not match for: /dev/wd2a
  405:     /dev/wd2a is not clean!
  406:     raid0: Component /dev/wd1a being configured at col: 1
  407:     Column: 0 Num Columns: 0
  408:     Version: 0 Serial Number: 0 Mod Counter: 0
  409:     Clean: No Status: 0
  410:     Column out of alignment for: /dev/wd1a
  411:     Number of columns do not match for: /dev/wd1a
  412:     /dev/wd1a is not clean!
  413:     raid0: There were fatal errors
  414:     raid0: Fatal errors being ignored.
  415:     raid0: RAID Level 1
  416:     raid0: Components: /dev/wd2a /dev/wd1a
  417:     raid0: Total Sectors: 19540864 (9541 MB)
  418:     # raidctl -v -I 2004082401 raid0
  419:     # raidctl -v -i raid0
  420:     Initiating re-write of parity
  421:     # tail -1 /var/log/messages
  422:     raid0: Error re-writing parity!
  423:     # raidctl -v -s raid0
  424:     Components:
  425:     /dev/wd2a: optimal
  426:     /dev/wd1a: optimal
  427:     No spares.
  428:     Component label for /dev/wd1a:
  429:     Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
  430:     Version: 2, Serial Number: 2004082401, Mod Counter: 7
  431:     Clean: No, Status: 0
  432:     sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
  433:     Queue size: 100, blocksize: 512, numBlocks: 19540864
  434:     RAID Level: 1
  435:     Autoconfig: No
  436:     Root partition: No
  437:     Last configured as: raid0
  438:     Parity status: DIRTY
  439:     Reconstruction is 100% complete.
  440:     Parity Re-write is 100% complete.
  441:     Copyback is 100% complete.
  442:     Component label for /dev/wd2a:
  443:     Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
  444:     Version: 2, Serial Number: 2004082401, Mod Counter: 7
  445:     Clean: No, Status: 0
  446:     sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
  447:     Queue size: 100, blocksize: 512, numBlocks: 19540864
  448:     RAID Level: 1
  449:     Autoconfig: No
  450:     Root partition: No
  451:     Last configured as: raid0
  452:     Parity status: DIRTY
  453:     Reconstruction is 100% complete.
  454:     Parity Re-write is 100% complete.
  455:     Copyback is 100% complete.
  456:             
  457: 
  458: After setting up the raid we need to create a disklabel on the raid disk.
  459: 
  460: On i386:
  461: 
    # disklabel -r -e -I raid0
  463:     type: RAID
  464:     disk: raid
  465:     label: fictitious
  466:     flags:
  467:     bytes/sector: 512
  468:     sectors/track: 128
  469:     tracks/cylinder: 8
  470:     sectors/cylinder: 1024
  471:     cylinders: 19082
  472:     total sectors: 19540864
  473:     rpm: 3600
  474:     interleave: 1
  475:     trackskew: 0
  476:     cylinderskew: 0
  477:     headswitch: 0 # microseconds
  478:     track-to-track seek: 0 # microseconds
  479:     drivedata: 0
  480:     
  481:     #        size    offset     fstype [fsize bsize cpg/sgs]
  482:     a:  19540789        65     4.2BSD      0     0     0  # (Cyl.      0 - 18569)
  483:     d:  19540864         0     unused      0     0        # (Cyl.      0 - 19082*)
  484: 
  485: On sparc64:
  486: 
  487:     # disklabel -r -e -I raid0
  488:     [...snip...]
  489:     total sectors: 19539968
  490:     [...snip...]
  491:     2 partitions:
  492:     #        size    offset     fstype [fsize bsize cpg/sgs]
  493:     a:  19540793        65     4.2BSD      0     0     0  # (Cyl.      0 -  18799)
  494:     c:  19539968         0     unused      0     0        # (Cyl.      0 -  19081)
  495: 
Partitions should be created with an offset of 65, because sectors below 65
are marked read-only and therefore cannot be rewritten.
  498: 
  499: ### Creating PV, VG on raid disk
  500: 
Physical volumes can be created on any disk-like block device, and on any
partition on it. Thus, we can use the `a`, `d`, or (on sparc64) `c`
partitions. Creating the PV labels the selected partition as used by LVM and
adds the needed metainformation to it.
  505: 
The PV is created on the character disk device, as with all other disk
operations in NetBSD:
  508: 
  509:     # lvm pvcreate /dev/rraid0a
  510: 
For the purposes of this example we will create the `vg00` Volume Group. The
first parameter of `vgcreate` is the name of the volume group, and the second
is the PV created on the raid. If you later find that the volume group size
is not sufficient and you need more space, you can extend it with `vgextend`:
  515: 
  516:     # lvm vgcreate vg00 /dev/rraid0a
  517:     # lvm vgextend vg00 /dev/rraid1a
  518: 
**Warning**: If you add a non-raid PV to your Volume Group, your data is no
longer safe. Therefore you should only add raid-based PVs to the VG if you
want to keep your data safe.
  522: 
  523: ### Creating LVs from VG located on raid disk
  524: 
For this example we will create a Logical Volume named lv0. If you later find
that the LV size is not sufficient, you can grow it with `lvresize`.

*Note*: You have to resize the filesystem after you have resized the LV.
Otherwise you will not see any change in the filesystem when you mount the LV.
  530: 
**Warning**: Be aware that shrinking an ffs file system is not supported in
NetBSD. If you want to experiment with file system shrinking, you must shrink
the file system before you shrink the LV. This means that the `-L-*` option
is not usable on NetBSD.
  535: 
  536:     # lvm lvcreate -n lv0 -L 2G vg00
  537:     # lvm lvresize -L+2G vg00/lv0
  538: 
All LV device nodes are created in the `/dev/vg00/` directory. A file system
can be created on the LV with the following command, after which the LV can
be mounted:
  542: 
  543:     # newfs -O2 /dev/vg00/rlv0
  544:     # mount /dev/vg00/lv0 /mnt/
  545: 
### Integration of LVs into the system

For proper LVM integration you have to enable the lvm rc.d script, which
detects LVs during boot and activates them. You also have to add an entry for
each Logical Volume to the `/etc/fstab` file.
  551: 
  552:     # cat /etc/rc.conf
  553:     [snip]
  554:     lvm=yes
  555: 
  556:     # cat /etc/fstab
  557:     /dev/wd0a               /       ffs     rw               1 1
  558:     /dev/vg00/lv0           /lv0/   ffs     rw               1 1
  559:     [snip]
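
With those entries in place, a first-time setup might look roughly like this
(a sketch; after the next boot the rc.d script and the fstab entry take care
of this automatically):

    # mkdir -p /lv0
    # /etc/rc.d/lvm start
    # mount /dev/vg00/lv0 /lv0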
  560: 
