File: [NetBSD Developer Wiki] / wikisrc / users / haad / porting_zfs.mdwn
Revision 1.2: Fri Oct 23 23:47:20 2009 UTC; web commit by haad

# 1. Status of NetBSD zfs port
 
The NetBSD ZFS port is a work in progress and can easily panic your system.

**ZFS currently works only on i386!**

---
# 2. Using NetBSD ZFS port 

## Installation

Use any -current build for the i386 or amd64 architecture. All tools and modules should now be built by default. Two kernel modules are used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the zfs module provides the ZFS file system functions themselves.

## Configuration

The user needs to load both modules:

    modload solaris
    modload zfs
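
To verify that both modules are resident, modstat(8) can be used, for example:

    # list loaded modules and filter for the two ZFS-related ones
    modstat | grep -E 'solaris|zfs'

If either module is missing from the output, the corresponding modload command above failed.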

After loading the modules, the user can create a zpool (the ZFS equivalent of a volume manager) and manage ZFS file systems on it.

    zpool create {name} {type} {device}

type:

 * mirror
 * raidz
 * raidz2
 * if omitted, the default is plain linear (striped) allocation

The device is a block device on NetBSD, for example /dev/sd0a.

    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool named tank across two disk partitions.
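
Once a pool exists, its layout and capacity can be inspected with the standard zpool subcommands; a brief sketch (the pool name tank is from the example above):

    # list pools with their size and usage
    zpool list

    # show the pool layout and per-device health
    zpool status tank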


## Administration
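
File systems on a pool are managed with the zfs command; a brief sketch of the usual subcommands (dataset names such as tank/home are examples, not part of this port's documentation):

    # create a file system inside the pool
    zfs create tank/home

    # set a property on it, e.g. enable compression
    zfs set compression=on tank/home

    # list datasets and their space usage
    zfs list

    # destroy a file system when no longer needed
    zfs destroy tank/home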

---
# 3. Known Bugs

## Show stoppers 

### amd64
* amd64 zio_root crash

 Investigation

 vdev_label_read_config -> zio_root calls zio_wait on a zio_root zio_t. Later, zio_execute tries to
 generate a checksum on zio->zio_data, which is NULL for a zio_root. Is this a resurrection of Andy's
 zio_null problem? zio_root is basically zio_null.

 Solution

 Finding out what differs between the i386 and amd64 versions, and why the code works on i386 but not
 on amd64, may lead to a solution.
### i386 

### Both 
* vnode reclaiming deadlocks

 This can be fixed by deferring the call to zfs_zinactive in zfs_reclaim to another system thread when
 the lock is held. But that causes a deadlock, and we need to investigate whether it is caused by this
 change or by another problem.
			
 The deadlock backtrace is:

 VOP_WRITE->zfs_netbsd_write->zfs_write->dmu_tx_wait->txg_wait_open->cv_wait
 txg_quiesce_thread->txg_thread_wait->cv_wait
 txg_sync_thread->spa_sync->dsl_pool_sync->zio_wait->cv_wait
			
 The FreeBSD approach should be investigated: why do they do this differently, and why does it work for
 them? They call zfs_zinactive from zfs_freebsd_inactive, which is a null op on NetBSD.

 The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.


## Functional bugs 
* Snapshots
* Permissions
* Old code; the NetBSD ZFS port should be updated to the current upstream code
