The NetBSD ZFS port is a work in progress.
# 2. Using NetBSD ZFS port

## Installation
Use any -current build for the i386 or amd64 architecture. All tools and modules should now be built by default. Two kernel modules are used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the zfs module provides the ZFS file system itself.

Pools are created with `zpool create <name> <type> <devices>`, where type is:
* raidz2
* default is normal linear allocation

*device* is a block device on NetBSD, for example `/dev/sd0a`.

`zpool create tank mirror /dev/sd0a /dev/sd1a` creates a mirrored zpool across two disk partitions.

After creating ZVOLs and file systems, they can be used.
### Both

* vnode reclaiming deadlocks

This can be fixed by deferring the call to zfs_zinactive in zfs_reclaim to another system thread when the lock is held.

But that change causes a deadlock of its own, and we need to investigate whether the deadlock is caused by the change or by another problem.

The deadlock backtrace is:

    VOP_WRITE -> zfs_netbsd_write -> zfs_write -> dmu_tx_wait -> txg_wait_open -> cv_wait
    txg_quiesce_thread -> txg_thread_wait -> cv_wait
    txg_sync_thread -> spa_sync -> dsl_pool_sync -> zio_wait -> cv_wait
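The deferral idea above (hand the zfs_zinactive call off to a system thread instead of running it in the reclaim path while the lock is held) can be sketched in userland with pthreads. All names here are hypothetical stand-ins for illustration, not the actual osnet code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-in for a znode awaiting zfs_zinactive(). */
struct deferred_znode {
    int id;
    struct deferred_znode *next;
};

static struct deferred_znode *queue;     /* deferred-reclaim list */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcv   = PTHREAD_COND_INITIALIZER;
static int draining;                     /* set at shutdown */
static int processed;                    /* nodes handled by the worker */

/* Called from the reclaim path when the problematic lock is held:
 * queue the node and wake the worker instead of calling
 * zfs_zinactive() directly. */
void defer_zinactive(struct deferred_znode *zp)
{
    pthread_mutex_lock(&qlock);
    zp->next = queue;
    queue = zp;
    pthread_cond_signal(&qcv);
    pthread_mutex_unlock(&qlock);
}

/* System thread that performs the deferred work. */
void *reclaim_worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&qlock);
    for (;;) {
        while (queue == NULL && !draining)
            pthread_cond_wait(&qcv, &qlock);
        if (queue == NULL && draining)
            break;                       /* drained and shutting down */
        struct deferred_znode *zp = queue;
        queue = zp->next;
        pthread_mutex_unlock(&qlock);
        /* ...the real code would call zfs_zinactive(zp) here,
         * outside the caller's lock... */
        free(zp);
        pthread_mutex_lock(&qlock);
        processed++;
    }
    pthread_mutex_unlock(&qlock);
    return NULL;
}

/* Shutdown helper: let the worker drain the queue, then join it. */
void reclaim_worker_drain(pthread_t t)
{
    pthread_mutex_lock(&qlock);
    draining = 1;
    pthread_cond_signal(&qcv);
    pthread_mutex_unlock(&qlock);
    pthread_join(t, NULL);
}
```

A caller would start `reclaim_worker` once (e.g. at mount time); `defer_zinactive` is then safe to call from the reclaim path, because the actual work runs later in the worker's own context.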
The FreeBSD approach should be investigated: why does FreeBSD do this differently, and why does it work there? FreeBSD calls zfs_zinactive from zfs_freebsd_inactive, which is a null op on NetBSD.

The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.

The current code is quite a hack, but it works until I find some time to rework vnode reclaiming on NetBSD.

http://nxr.netbsd.org/xref/src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c#4254
* vnode fsync bug

I think that we are hitting this bug, too. I have some patches in the tree which fix the deadlock in vnode reclaim, but after that I get another deadlock in VOP_FSYNC.
* kmem_cache_alloc (aka pool_cache_get) panics due to the KM_NOSLEEP flag

There are some problems in the NetBSD UVM subsystem where a KM_NOSLEEP allocation can fail even if the system has enough free memory. I talked with ad@ about it, and he said there is one case in UVM where, while a certain lock is held, KM_NOSLEEP allocations will fail.
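The consequence for callers is that a KM_NOSLEEP (non-blocking) allocation must be treated as allowed to return NULL, never as a guarantee. A minimal userland sketch of the check-and-fallback pattern, using hypothetical stand-ins for the kernel allocator and flags:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the kernel allocation flags. */
#define DEMO_KM_SLEEP   0   /* may block until memory is available */
#define DEMO_KM_NOSLEEP 1   /* must not block; may fail spuriously  */

/* Simulates the UVM behaviour described above: the non-blocking
 * path can fail once even though memory is actually available. */
static int spurious_failures = 1;

static void *demo_kmem_alloc(size_t sz, int flags)
{
    if (flags == DEMO_KM_NOSLEEP && spurious_failures > 0) {
        spurious_failures--;
        return NULL;            /* spurious KM_NOSLEEP failure */
    }
    return malloc(sz);
}

/* Correct caller: check the NOSLEEP result and fall back (or bail
 * out) instead of dereferencing NULL and panicking. */
void *alloc_checked(size_t sz)
{
    void *p = demo_kmem_alloc(sz, DEMO_KM_NOSLEEP);
    if (p == NULL)
        p = demo_kmem_alloc(sz, DEMO_KM_SLEEP); /* safe to block here */
    return p;
}
```

The panics in the ZFS caches come from code that assumes the non-blocking allocation always succeeds; the fix is the NULL check shown above (or only using the blocking flag in contexts where sleeping is allowed).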
## Functional bugs

* Snapshots
* Permissions