--- wikisrc/users/haad/Attic/porting_zfs.mdwn	2009/10/26 23:19:05	1.6
+++ wikisrc/users/haad/Attic/porting_zfs.mdwn	2012/04/06 11:42:15	1.14
@@ -2,12 +2,12 @@
 
 NetBSD zfs port is work in progress and can easily panic your system.
 
-**ZFS currently works ony on i386!!**
+**ZFS currently works only on i386 and amd64!!**
 
 ---
 
 # 2. Using NetBSD ZFS port
 
-## Instalation
+## Installation
 
 Use any -current build for the i386 or amd64 architecture. All tools and modules should be built by default now. There are two modules used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the second module, zfs, provides the ZFS file system functions.
@@ -29,7 +29,7 @@
 where type is:
 * raidz2
 * default is normal linear allocation
 
-device is blod device on netbsd /dev/sd0a for example.
+device is a block device on NetBSD, /dev/sd0a for example.
 
 zpool create tank mirror /dev/sd0a /dev/sd1a creates a mirrored zpool between 2 disk partitions.
@@ -55,43 +55,28 @@
 After creating ZVOLS and filesystem they
 
 ## Show stoppers
 
 ### amd64
 
-* amd64 zio_root crash
-Investigation
-vdev_label_read_config -> zio_root calls zio_wait on a zio_root zio_t. Later zio_execute tries to generate
-checksum on zio->zio_data which is NULL for a zio_root. Is this ressurrection of andys zio_null problem ?
-because zio_root is basicaly zio_null.
-
- Solution
- What is difference between i386 and amd64 version why it is working on i386 and not on a amd64 that can be
- solution.
+
 
 ### i386
 
 ### Both
 
 * vnode reclaiming deadlocks
-
- This can be fixed by deferring call to zfs_zinactive in zfs_reclaim to another system thread if lock is held.
- But it causes deadlock and we need to investigate if it is caused by this change or by another problem.
-
- Deadlock backtrace is this
-
- VOP_WRITE->zfs_netbsd_write->zfs_write->dmu_tx_wait->txg_wait_open->cv_wait
- txq_quisce_thread->txg_thread_wait->cv_wait
- txg_sync_thread->spa_sync->dsl_pool_sync->zio_wait->cv_wait
-
- FreeBSD approach should be investigated why they are doing this differently and why it works for them.
- They call zfs_zinactive from zfs_freebsd_inactive which is null op for NetBSD.
-
- zfs umount panic is caused by using FreeBSD approach in zfs_reclaim.
+The current code is quite a hack, but it works until I find some time to rework vnode reclaiming on NetBSD.
+
+http://nxr.netbsd.org/xref/src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c#4254
+
 * vnode fsync bug
 I think that we are hitting this bug, too. I have some patches in the tree which fix a deadlock in vnode reclaim, but after that I'm getting another deadlock in VOP_FSYNC.
+* kmem_cache_alloc (aka pool_cache_get) panics due to the KM_NOSLEEP flag
+ There are some problems in the NetBSD UVM subsystem where a KM_NOSLEEP allocation can fail even if the system has enough memory. I talked with ad@ about it and he said that there is one case in UVM where, while some lock is held, a KM_NOSLEEP allocation will fail.
+
 ## Functional bugs
 
 * Snapshots
 * Permissions
 * Old code, we should update the NetBSD zfs port to new code
-* More tasks can be found at [http://nxr.netbsd.org/xref/src/external/cddl/osnet/TODO]
+* More tasks can be found at 
 
 # External Documentation links
 
-Nice zfs howto written for zfs + mac os x [http://code.google.com/p/maczfs/wiki/GettingStarted]
+Nice ZFS howto written for ZFS + Mac OS X