After loading the modules the user can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it:

    zpool create {zpool name} {type} {device}

where type is:

* mirror
* raidz
* raidz2
* default is normal linear allocation

The device is a block device on NetBSD, for example /dev/sd0a. For example,

    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool across two disk partitions.

With a zpool created we can create ZFS file systems or zvols (ZFS logical volume disks). The command

    zfs create -V {size} tank/{zvol name}

creates a zvol named {zvol name} on the zpool called tank. The logical disk is created as

    /dev/zvol/rdsk/{zpool name}/{zvol name}
    /dev/zvol/dsk/{zpool name}/{zvol name}

A ZFS file system on the zpool called tank is created with

    zfs create tank/{zfs name}

## Administration

After zvols and file systems are created they are saved in the /etc/zfs/zpool.cache file and loaded again after the next zfs module load.

---

# 3. Known Bugs

The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.

* vnode fsync bug

  I think that we are hitting this bug, too. I have some patches in the tree which fix a deadlock in vnode reclaim, but after that I'm getting another deadlock in VOP_FSYNC.

## Functional bugs

* Snapshots
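---

As a quick recap of the usage commands above, a complete session might look like the following sketch. The kernel module names (solaris, zfs), the zvol size (2g) and the dataset names (vol0, fs0) are assumptions made for illustration; the pool name and devices are the ones used in the examples above.

    # load the kernel modules (assumed module names)
    modload solaris
    modload zfs

    # create a mirrored zpool over two disk partitions
    zpool create tank mirror /dev/sd0a /dev/sd1a

    # create a 2 GB zvol; its disk nodes appear as
    # /dev/zvol/rdsk/tank/vol0 and /dev/zvol/dsk/tank/vol0
    zfs create -V 2g tank/vol0

    # create a ZFS file system on the same pool
    zfs create tank/fs0

    # verify the pool and the datasets
    zpool status tank
    zfs list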