Use any -current build for i386 or amd64. The user needs to load the kernel modules first:

    modload solaris
    modload zfs

After loading the modules the user can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it:

    zpool create {zpool name} {type} {device}

where type is:

* mirror
* raidz
* raidz2
* default is normal linear allocation

device is a block device on NetBSD, /dev/sd0a for example.

    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool over 2 disk partitions.

With a zpool created we can create ZFS file systems or zvols (ZFS logical volume disks):

    zfs create -V {size} tank/{zvol name}

creates a zvol named {zvol name} on the zpool called tank. The logical disk appears as

    /dev/zvol/rdsk/{zpool name}/{zvol name}
    /dev/zvol/dsk/{zpool name}/{zvol name}

A regular ZFS file system on the zpool called tank is created with

    zfs create tank/{zfs name}

A complete example session is given at the end of this page.

## Administration

After zpools, zvols and file systems have been created, their configuration is saved in the /etc/zfs/zpool.cache file and loaded again on the next zfs module load.

---

# 3. Known Bugs

## Show stoppers

### amd64

The amd64 zio_root crash (zio_execute tried to generate a checksum on zio->zio_data, which is NULL for a zio_root) was fixed in version 1.1 of sys/arch/amd64/include/Makefile.inc by adding the -mno-red-zone flag to the amd64 module build flags.

### i386

### Both

The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.

* vnode fsync bug

  I think that we are hitting this bug, too. I have some patches in the tree which fix a deadlock in vnode reclaim, but after that I get another deadlock in VOP_FSYNC.

## Functional bugs

* Snapshots
* Permissions
* Old code, we should update the NetBSD ZFS port to newer code
* More tasks can be found at

# External Documentation links

Nice ZFS howto written for ZFS + Mac OS X
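
# Example session

As a recap of the usage steps above, here is a minimal example session. The pool name tank, the zvol name vol0, its 1g size and the file system name data are only illustrative; substitute your own names and real spare partitions for /dev/sd0a and /dev/sd1a.

    # load the kernel modules
    modload solaris
    modload zfs

    # create a mirrored zpool over two partitions and check it
    zpool create tank mirror /dev/sd0a /dev/sd1a
    zpool status tank

    # create a 1 GB zvol; its disk node appears under /dev/zvol/{dsk,rdsk}/tank/vol0
    zfs create -V 1g tank/vol0

    # create a regular ZFS file system on the same pool and list everything
    zfs create tank/data
    zfs list

After this the pool configuration is recorded in /etc/zfs/zpool.cache, so it should be picked up again on the next zfs module load.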