# NetBSD Logical Volume Manager (LVM) configuration

NetBSD LVM allows logical volume management on NetBSD systems, with a
well-known user interface, the same as the Linux LVM2 tools.

NetBSD LVM is built on Linux lvm2tools and libdevmapper, together with a
BSD-licensed device-mapper kernel driver specially written for NetBSD.

The LVM driver allows the user to manage available disk space effectively and
efficiently. Disk space from several disks and partitions, known as *Physical
Volumes*, can be added to *Volume Groups*, which form the pool of available
disk space for *Logical Partitions*, also known as Logical Volumes.
   
Logical Volumes can be grown and shrunk at will using the LVM utilities.
   
The basic building block is the Physical Volume. This is a disk, or a part of a
disk, which is used to store data.
   
Physical Volumes are aggregated together to make Volume Groups, or VGs.
Typically, Volume Groups are used to aggregate storage for the same functional
unit. Typical Volume Groups could thus be named `Audio`, `Multimedia` or
`Documents`. By segregating storage requirements in this functional way, the
same type of resilience and redundancy is applied to the whole of the
functional unit.
   
The steps required to set up LVM are described in the sections that follow.
![Anatomy of Logical Volume Management](/guide/images/lvm.png)
   
 1. **Volume Group**
        The Volume Group is a disk space pool from which the user creates Logical
        Volumes and to which Physical Volumes can be added. It is the basic
        administration unit of the NetBSD LVM implementation.
   
 2. **Physical Volume**
        A physical volume is the basic unit in an LVM structure. Every PV consists of
        small disk space chunks called Physical Extents. Every Volume Group must
        have at least one PV. A PV can be created on hard disks or hard disk-like
        devices such as raid, ccd, or cgd devices.
   
 3. **Logical Volume**
        The Logical Volume is a logical partition created from disk space assigned
        to the Volume Group. An LV can be newfsed and mounted like any other
        pseudo-disk device. The LVM tools use functionality exported by the
        device-mapper driver in the kernel to create the LV.
   
 4. **Physical Extents**
        Each physical volume is divided into chunks of disk space. The default size
        is 4MB. Every LV size is rounded to a multiple of the PE size. The LV is
        created by mapping Logical Extents in the LV to Physical Extents in a
        Volume Group.
   
 5. **Logical Extents**
        Each logical volume is split into chunks of disk space, known as logical
        extents. The extent size is the same for all logical volumes in the volume
        group.
   
 6. **Physical Extents mapping**
        Every LV consists of *LEs* mapped to *PEs* by a target mapping. Currently,
        the following mappings are defined.
   
     * **Linear Mapping**
       will linearly assign a range of PEs to LEs. For example, it can map 100
       PEs from PV 1 to LV 1 and another 100 PEs from PV 0.
   
     * **Stripe Mapping**
       will interleave the chunks of the logical extents across a number of
       physical volumes (see the sketch after this list).
   
 7. **Snapshots**

        A facility provided by LVM is 'snapshots'. Whilst in standard NetBSD the
        [fss(4)](http://netbsd.gw.com/cgi-bin/man-cgi?fss+4+NetBSD-current) driver
        can be used to provide snapshots at the file system level, the
        snapshot facility in the LVM allows the administrator to create a logical
        block device which presents an exact copy of a logical volume, frozen at
        some point in time. This facility does require that the snapshot be made at
        a time when the data on the logical volume is in a consistent state.

        *Warning*: The snapshot feature is not fully implemented in LVM in NetBSD
        and should not be used in production.
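
As an illustration of stripe mapping, here is a minimal sketch of creating a
striped LV (this assumes the `vg0` volume group created later in this guide and
at least two physical volumes in it; `-i` sets the number of stripes and `-I`
the stripe size in kilobytes):

    # lvm lvcreate -n lv_striped -L 200M -i 2 -I 64 vg0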
   
   
## Install physical media

This step is at your own discretion, depending on your platform and the
hardware at your disposal. LVM can be used with disklabel partitions or even
with standard partitions created with fdisk.

From my `dmesg`:
   
## Configure Kernel Support

The following kernel configuration directive is needed to provide LVM device
support. The dm driver is provided as a kernel module, so no extra
modifications need to be made to a standard NetBSD kernel; it first appeared in
the NetBSD 6.0 release.

If your system doesn't use modules, you can enable the dm driver in NetBSD by
adding the following line to the kernel configuration file. This will add the
device-mapper driver to the kernel and link it as a statically linked module.
   
    pseudo-device dm
   
If you do not want to rebuild your kernel only because of LVM support, you can
use the dm kernel module instead. To get the current status of modules in the
kernel, the
[modstat(8)](http://netbsd.gw.com/cgi-bin/man-cgi?modstat+8+NetBSD-current)
tool is used:
    ptyfs           vfs     filesys 0       7852    -
   
You can use
[modload(8)](http://netbsd.gw.com/cgi-bin/man-cgi?modload+8+NetBSD-current) to
load the dm kernel module by issuing `modload dm`:
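
    vm1# modload dm         # a sketch of the load itself; vm1 is the hostname used in these listings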
   
    vm1# modstat
   
## Configure LVM on a NetBSD system

To use LVM you have to install the lvm2tools and libdevmapper on the NetBSD
system. These tools and libraries are not built by default.

To enable the build of the LVM tools, set `MKLVM=yes` in the `/etc/mk.conf` or
`MAKECONF` file.
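
A minimal sketch of the relevant entry (this assumes you subsequently rebuild
the system, or at least the tools, from source so that the setting takes
effect):

    # excerpt from /etc/mk.conf
    MKLVM=yes       # also build the LVM tools and libdevmapper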
   
## Disklabel each physical volume member of the LVM

Each physical volume disk in the LVM will need to be disklabeled. In this
example, I will need to disklabel:

    /dev/rsd0d
    /dev/rsd1d
    /dev/rsd2d
    /dev/rsd3d
   
It should be borne in mind that it is possible to use the NetBSD vnd driver to
make standard file system space appear in the system as a disk device.

*Note*: Always remember to disklabel the character device, not the block
device, in `/dev/r{s,w}d*`
   
*Note*: On all platforms except i386, where the `d` partition is used for this,
the `c` slice is symbolic of the entire NetBSD partition and is reserved.
   
You will probably want to remove any pre-existing disklabels on the physical
volume disks in the LVM. This can be accomplished in one of two ways with the
[dd(1)](http://netbsd.gw.com/cgi-bin/man-cgi?dd+1+NetBSD-5.0.1+i386) command:

    # dd if=/dev/zero of=/dev/rsd0d bs=8k count=1
    # dd if=/dev/zero of=/dev/rsd1d bs=8k count=1
    # dd if=/dev/zero of=/dev/rsd2d bs=8k count=1
    # dd if=/dev/zero of=/dev/rsd3d bs=8k count=1
   
If your port uses an MBR (Master Boot Record) to partition the disks so that
the NetBSD partitions are only part of the overall disk, and other OSs like
Windows or Linux use other parts, you can void the MBR and all partitions on
the disk by using the command:

    # dd if=/dev/zero of=/dev/rsd0d bs=8k count=1
    # dd if=/dev/zero of=/dev/rsd1d bs=8k count=1
    # dd if=/dev/zero of=/dev/rsd2d bs=8k count=1
    # dd if=/dev/zero of=/dev/rsd3d bs=8k count=1
   
This will make all data on the entire disk inaccessible. Note that the entire
disk is slice `d` on i386 (and some other ports), and `c` elsewhere (e.g. on
sparc). See the `kern.rawpartition` sysctl - `3` means `d`, `2` means `c`.
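
You can query this directly; for example, on an i386-style system the output
would look like this (illustrative):

    # sysctl kern.rawpartition
    kern.rawpartition = 3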
   
The default disklabel for the disk will look similar to this:
    cylinderskew: 0
    headswitch: 0           # microseconds
    track-to-track seek: 0  # microseconds
    drivedata: 0

    4 partitions:
    #        size    offset     fstype [fsize bsize cpg/sgs]
    a:    208896         0     4.2BSD      0     0     0  # (Cyl.      0 -    207*)
    d:    208896         0     unused      0     0        # (Cyl.      0 -    207*)
   
You will need to create one *slice* on the NetBSD partition of the disk that
consumes the entire partition. The slice must begin at least two sectors after
the end of the disklabel part of the disk. On i386 that is `sector` 63.
Therefore, the `size` value should be `total sectors` minus 2x `sectors`. Edit
your disklabel accordingly:

    # disklabel -e sd0
   
*Note*: The offset of a slice of type `4.2BSD` must be a multiple of the
`sectors` value.

*Note*: Be sure to `export EDITOR=[path to your favorite editor]` before
editing the disklabels.

*Note*: The slice must be fstype `4.2BSD`.
   
Because there will only be one slice on this partition, you can recycle the `d`
slice (normally reserved for symbolic uses). Change your disklabel to the
following:

    3 partitions:
    #        size   offset    fstype   [fsize bsize   cpg]
     d:  4197403       65      4.2BSD                       # (Cyl. 1 - 4020*)
   
Optionally, you can set up a slice other than `d` to use; simply adjust
accordingly below:

    3 partitions:
    #        size   offset    fstype   [fsize bsize   cpg]
     a:  4197403       65      4.2BSD                       # (Cyl. 1 - 4020*)
     c:  4197405       0       unused     1024  8192        # (Cyl. 0 - 4020*)
   
Be sure to write the label when you have finished. Disklabel will object to
your disklabel and prompt you to re-edit it if it does not pass its sanity
checks.
   
## Create Physical Volumes

Once all disks are properly labeled, you will need to create a physical volume
on them. Every partition/disk added to LVM must have a physical volume header
at its start. All information, such as the Volume Group to which the Physical
Volume belongs, is stored in this header.

    # lvm pvcreate /dev/rwd1[ad]
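
The result can be inspected with the `pvdisplay` command (a sketch of the
invocation; the output is omitted here):

    # lvm pvdisplay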
   
## Create Volume Group

Once all disks are properly labeled with a physical volume header, a volume
group must be created from them. A Volume Group is a pool of PEs from which the
administrator can create Logical Volumes (*partitions*).

    # lvm vgcreate vg0 /dev/rwd1[ad]
 * `vg0` is the name of the Volume Group
 * `/dev/rwd1[ad]` is the Physical Volume
   
The volume group can later be extended or reduced with the
[vgextend(8)](http://netbsd.gw.com/cgi-bin/man-cgi?vgextend+8+NetBSD-current)
and
[vgreduce(8)](http://netbsd.gw.com/cgi-bin/man-cgi?vgreduce+8+NetBSD-current)
commands.
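
A sketch of both operations (the additional `/dev/rwd2a` physical volume is
hypothetical and would have to be prepared with `pvcreate` first; `vgreduce`
only works while no logical volume is using extents on the PV being removed):

    # lvm vgextend vg0 /dev/rwd2a
    # lvm vgreduce vg0 /dev/rwd2a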
   
## Create Logical Volume

Once the volume group has been created, the administrator can create `logical
partitions`, i.e. Logical Volumes.

    # lvm lvcreate -L 20M -n lv1 vg0
The LV can later be grown or shrunk with the `lvextend` and `lvreduce`
commands:

    # lvm lvextend -L+20M /dev/vg0/lv1
    # lvm lvreduce -L-20M /dev/vg0/lv1
   
*Note*: To shrink an LV partition you have to shrink the filesystem first. See
the manpage of
[resize_ffs(8)](http://netbsd.gw.com/cgi-bin/man-cgi?resize_ffs+8+NetBSD-current)
for how to do this.
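
As a sketch, growing `lv1` and its ffs file system might look like this (the
`/mnt/lv1` mount point is hypothetical, the file system must be unmounted
while `resize_ffs` runs, and older releases may require an explicit `-s` size
argument; see resize_ffs(8)):

    # umount /mnt/lv1
    # lvm lvextend -L+20M /dev/vg0/lv1
    # resize_ffs /dev/vg0/rlv1
    # fsck -fy /dev/vg0/rlv1
    # mount /dev/vg0/lv1 /mnt/lv1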
   
The status of a Logical Volume can be viewed with the `lvm lvdisplay` command:
   
    # lvm lvdisplay vg0/lv1
   
After reboot, all functional LVs in the defined volume group can be activated
with the command:

    # lvm vgchange -a y
   
## Example: LVM with Volume Groups located on raid1

The motivation for using a raid 1 disk as the physical volume disk for a Volume
Group is disk reliability. With the PV on a raid 1 disk it is possible to use
Logical Volumes even after a disk failure.
   
### Loading Device-Mapper driver

Before we can start working with the LVM tools, we have to be sure that the
NetBSD dm driver was properly compiled into the kernel or loaded as a module.
The easiest way to find out whether the dm driver is available is to run
`modstat`. For more information, see the [[Configure Kernel Support
chapter|guide/lvm#configuring-kernel]].
   
### Preparing raid1 installation

Following the example raid configuration defined in [[Raid 1
configuration|guide/rf#configuring-raid]], the user will set up a clean raid1
disk device with 2 disks in mirror mode.

#### Example RAID1 configuration
On sparc64:

    a:  19540793        65     4.2BSD      0     0     0  # (Cyl.      0 -  18799)
    c:  19539968         0     unused      0     0        # (Cyl.      0 -  19081)
   
Partitions should be created with offset 65, because sectors below sector 65
are marked as read-only and therefore can't be rewritten.
   
### Creating PV, VG on raid disk

Physical volumes can be created on any block device (i.e., disk-like device),
and on any partition on it. Thus, we can use the `a`, `d`, or on sparc64 `c`
partitions. Creating the PV will label the selected partition as used by LVM
and add the needed metainformation for the LVM to it.

The PV is created on the character disk device, as with all other disk
operations in NetBSD:

    # lvm pvcreate /dev/rraid0a
   
For the purposes of this example I will create the `vg00` Volume Group. The
first parameter of `vgcreate` is the name of the volume group, and the second
is the PV created on the raid. If you later find out that the volume group
size is not sufficient and you need more space, you can extend it with
`vgextend`:

    # lvm vgcreate vg00 /dev/rraid0a
    # lvm vgextend vg00 /dev/rraid1a
   
**Warning**: If you add a non-raid PV to your Volume Group, your data is no
longer safe. Therefore you should only add raid-based PVs to the VG if you
want to keep your data safe.
   
### Creating LVs from VG located on raid disk

For the purposes of this example we will create a Logical Volume named lv0. If
you later find that the LV size is not sufficient for you, you can grow it
with `lvresize`.

*Note*: You have to resize the file system after you have resized the LV.
Otherwise you will not see any change in the file system when you mount the LV.

**Warning**: Be aware that shrinking an ffs file system is not supported in
NetBSD. If you want to experiment with file system shrinking, you must shrink
the file system before you shrink the LV. This means that the `-L-*` option is
not usable in NetBSD.
   
    # lvm lvcreate -n lv0 -L 2G vg00
    # lvm lvresize -L+2G vg00/lv0
   
All LV device nodes are created in the `/dev/vg00/` directory. A file system
can be created on the LV with the following command. After file system
creation, the LV can be mounted on the system.

    # newfs -O2 /dev/vg00/rlv0
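    # mount /dev/vg00/lv0 /mnt          # a sketch: mount the new file system; the mount point is illustrative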
   
### Integration of LVs into the system

For proper LVM integration you have to enable the lvm rc.d script, which
detects LVs during boot and enables them. You also have to add an entry for
the Logical Volume to the `/etc/fstab` file.

    # cat /etc/rc.conf
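    lvm=YES          # a sketch: the original listing is truncated here; this enables the lvm rc.d script

A matching `/etc/fstab` entry might look like this (also a sketch, assuming the
`vg00/lv0` volume from this example and a hypothetical `/mnt/lv0` mount point):

    /dev/vg00/lv0   /mnt/lv0    ffs     rw      1       2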
