The NetBSD ZFS port is a work in progress and can easily panic your system.

**ZFS currently works only on i386 and amd64!!**

---

# 2. Using NetBSD ZFS port

## Installation

Use any -current build for the i386 or amd64 architecture. All tools and modules should be built by default now. There are two kernel modules used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the zfs module provides the ZFS file system functions.

The user needs to run

    modload solaris
    modload zfs

After loading the modules, the user can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it:

    zpool create {zpool name} {type} {device}

where type is:

* mirror
* raidz
* raidz2
* default (no type given) is normal linear allocation

device is a block device on NetBSD, for example /dev/sd0a.

    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool across two disk partitions.

With a zpool created we can create ZFS file systems or zvols (ZFS logical volume disks):

    zfs create -V {size} tank/{zvol name}

creates a zvol called {zvol name} on the zpool called tank. The logical disk is created as

    /dev/zvol/rdsk/{zpool name}/{zvol name}
    /dev/zvol/dsk/{zpool name}/{zvol name}

and

    zfs create tank/{zfs name}

creates a ZFS file system on the zpool called tank.

## Administration

After zvols and file systems are created they are saved in the /etc/zfs/zpool.cache file and loaded again on the next zfs module load.
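To tie the walkthrough above together, here is a minimal example session; the device names (/dev/sd0a, /dev/sd1a), the pool name tank, the zvol size, and the dataset names vol0 and data are only illustrative, and zpool status / zfs list are just the standard ZFS status commands, nothing NetBSD-specific:

    # load the kernel modules
    modload solaris
    modload zfs

    # create a mirrored pool from two spare partitions
    zpool create tank mirror /dev/sd0a /dev/sd1a

    # a 2 GB zvol, visible as /dev/zvol/rdsk/tank/vol0 and /dev/zvol/dsk/tank/vol0
    zfs create -V 2g tank/vol0

    # a regular ZFS file system on the same pool
    zfs create tank/data

    # check the result
    zpool status tank
    zfs list

Because the pool configuration is cached in /etc/zfs/zpool.cache, the pool and its datasets should come back automatically the next time the modules are loaded.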
---

# 3. Known Bugs

## Show stoppers

### amd64

### i386

### Both

* vnode reclaiming deadlocks

  The current code is quite a hack, but it works until I find some time to rework vnode reclaiming on NetBSD.

  http://nxr.netbsd.org/xref/src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c#4254

* vnode fsync bug

  I think that we are hitting this bug, too. I have some patches in the tree which fix the deadlock in vnode reclaim, but after that I'm getting another deadlock in VOP_FSYNC.

* kmem_cache_alloc aka pool_cache_get panics due to the KM_NOSLEEP flag

  There are some problems in the NetBSD UVM subsystem where a KM_NOSLEEP allocation can fail even if the system has enough memory. I talked with ad@ about it and he said that there is one problem in UVM where, while some lock is held, a KM_NOSLEEP allocation will fail.

## Functional bugs

* Snapshots
* Permissions
* Old code, we should update the NetBSD ZFS port to the new code
* More tasks can be found at

# External Documentation links

Nice ZFS howto written for ZFS + Mac OS X