# 1. Status of NetBSD zfs port
The NetBSD ZFS port is a work in progress and can easily panic your system.
**ZFS currently works only on i386!!**
---
# 2. Using NetBSD ZFS port
## Installation
Use any -current build for the i386 or amd64 architecture. All tools and modules should now be built by default. ZFS uses two kernel modules, solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the second module, zfs, provides the ZFS file system functions themselves.
## Configuration
The user needs to load both modules:

    modload solaris
    modload zfs
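To confirm that both modules are resident after loading, modstat(8) can be used (a quick sanity check, not required):

```shell
# List loaded kernel modules and filter for the two ZFS-related ones.
modstat | grep -E 'solaris|zfs'
```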
After loading the modules, the user can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it.
    zpool create {zpool name} {type} {device}

where type is:

* mirror
* raidz
* raidz2
* default (no type given) is normal linear allocation

and device is a NetBSD block device, for example /dev/sd0a.
    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool across two disk partitions.
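As a fuller sketch, the following hypothetical session creates a mirrored pool and then verifies it; the device names /dev/sd0a and /dev/sd1a are placeholders for whatever partitions exist on your machine:

```shell
# Create a mirrored pool named "tank" from two partition block devices.
zpool create tank mirror /dev/sd0a /dev/sd1a

# List pools with their size and usage.
zpool list

# Show the pool layout and the health of each vdev.
zpool status tank
```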
With a zpool created, we can create ZFS file systems or zvols (ZFS logical volume disks).

    zfs create -V {size} tank/{zvol name}

creates a zvol named {zvol name} on the zpool called tank. The logical disk is created as:

    /dev/zvol/rdsk/{zpool name}/{zvol name}
    /dev/zvol/dsk/{zpool name}/{zvol name}

    zfs create tank/{zfs name}

creates a ZFS file system on the zpool called tank.
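Putting the two creation commands together, a hypothetical session might look like this (the dataset names tank/vol0 and tank/home and the 1g size are illustrative, not prescribed):

```shell
# Create a 1 GB zvol; its block and raw devices appear under /dev/zvol/.
zfs create -V 1g tank/vol0
ls -l /dev/zvol/dsk/tank/vol0 /dev/zvol/rdsk/tank/vol0

# Create an ordinary ZFS file system on the same pool.
zfs create tank/home

# List all datasets (file systems and zvols) on the pool.
zfs list
```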
## Administration
After zvols and file systems are created, their configuration is saved in the /etc/zfs/zpool.cache file and loaded again on the next zfs module load.
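In practice this means pools do not need to be recreated after a reboot; reloading the modules should be enough to bring them back (a sketch, assuming /etc/zfs/zpool.cache survived intact):

```shell
# After a reboot, reload the modules; zfs reads /etc/zfs/zpool.cache
# and reassembles any pools recorded there.
modload solaris
modload zfs

# The previously created pool should reappear without a new "zpool create".
zpool list
```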
---
# 3. Known Bugs
## Show stoppers
### amd64
* amd64 zio_root crash

Investigation: vdev_label_read_config -> zio_root calls zio_wait on a zio_root zio_t. Later, zio_execute tries to generate a checksum on zio->zio_data, which is NULL for a zio_root. Is this a resurrection of andy's zio_null problem? zio_root is basically zio_null.
Solution: Find what differs between the i386 and amd64 versions; understanding why it works on i386 but not on amd64 may point to the fix.
### i386
### Both
* vnode reclaiming deadlocks

This can be fixed by deferring the call to zfs_zinactive in zfs_reclaim to another system thread if the lock is held. But that causes a deadlock, and we need to investigate whether it is caused by this change or by another problem.
The deadlock backtrace is:

    VOP_WRITE->zfs_netbsd_write->zfs_write->dmu_tx_wait->txg_wait_open->cv_wait
    txg_quiesce_thread->txg_thread_wait->cv_wait
    txg_sync_thread->spa_sync->dsl_pool_sync->zio_wait->cv_wait
The FreeBSD approach should be investigated: why do they do this differently, and why does it work for them? They call zfs_zinactive from zfs_freebsd_inactive, which is a null op for NetBSD. The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.
* vnode fsync bug

I think we are hitting this bug, too. I have some patches in the tree which fix the deadlock in vnode reclaim, but after that I get another deadlock in VOP_FSYNC.
## Functional bugs
* Snapshots
* Permissions
* Old code; we should update the NetBSD ZFS port to newer upstream code