File: [NetBSD Developer Wiki] / wikisrc / users / haad / Attic / porting_zfs.mdwn
Revision 1.5, Sat Oct 24 00:04:44 2009 UTC; web commit by haad
Branches: MAIN; CVS tags: HEAD

# 1. Status of NetBSD zfs port

The NetBSD ZFS port is a work in progress and can easily panic your system.

**ZFS currently works only on i386!!**

---
# 2. Using NetBSD ZFS port

## Installation

Use any -current build for the i386 or amd64 architecture. All tools and modules should now be built by default. There are 2 kernel modules used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the zfs module provides the ZFS file system functions themselves.

## Configuration

The user needs to load both modules:

    modload solaris
    modload zfs

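Whether both modules actually loaded can be verified with modstat(8); the grep pattern below is just one way to filter the list:

```shell
# List loaded kernel modules and filter for the two ZFS-related ones.
# Both "solaris" and "zfs" must appear before any zpool/zfs command is run.
modstat | grep -E 'solaris|zfs'
```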
After loading the modules, the user can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it.

    zpool create {zpool name} {type} {device}

where type is one of:

* mirror
* raidz
* raidz2
* default (when no type is given) is normal linear allocation

device is a block device on NetBSD, for example /dev/sd0a.

The following creates a mirrored zpool across 2 disk partitions:

    zpool create tank mirror /dev/sd0a /dev/sd1a

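Once a pool exists, its health and layout can be inspected; a sketch of a typical follow-up, using the pool name tank from the example above (these commands need the modules loaded and root privileges):

```shell
# Summary of all pools: size, usage, health.
zpool list
# Per-device detail for the mirrored pool created above.
zpool status tank
```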
With the zpool created, we can create ZFS file systems or zvols (ZFS logical volume disks). The following creates a zvol named {zvol name} with the given {size} in the zpool called tank:

    zfs create -V {size} tank/{zvol name}

The logical disk device nodes are created in

    /dev/zvol/rdsk/{zpool name}/{zvol name}

    /dev/zvol/dsk/{zpool name}/{zvol name}

A ZFS file system on the zpool called tank is created with:

    zfs create tank/{zfs name}

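Putting the zvol commands together: a hypothetical session that creates a 1 GB zvol called vol0 in tank and puts an FFS file system on it. The names are made up for illustration, and the newfs/mount step is an assumption about typical use, not something this work-in-progress port guarantees:

```shell
# Create a 1 GB zvol; it appears under /dev/zvol/{r,}dsk/tank/vol0.
zfs create -V 1g tank/vol0
# Build an FFS file system on the character device...
newfs /dev/zvol/rdsk/tank/vol0
# ...and mount the block device.
mount /dev/zvol/dsk/tank/vol0 /mnt
```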
## Administration

After zvols and file systems are created, they are recorded in the /etc/zfs/zpool.cache file and loaded again on the next zfs module load.

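Because the pool configuration is cached, reloading the modules should bring existing pools back without an explicit import. A sketch; the unload ordering assumes zfs depends on the solaris module, as the installation section suggests:

```shell
# Unload in reverse dependency order, then reload.
modunload zfs
modunload solaris
modload solaris
modload zfs
# Pools recorded in /etc/zfs/zpool.cache reappear automatically.
zpool list
```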
---
# 3. Known Bugs

## Show stoppers

### amd64
* amd64 zio_root crash

 Investigation:
 vdev_label_read_config -> zio_root calls zio_wait on a zio_root zio_t. Later zio_execute tries to generate a
 checksum on zio->zio_data, which is NULL for a zio_root. Is this a resurrection of andys zio_null problem,
 given that zio_root is basically zio_null?

 Solution:
 Finding out how the i386 and amd64 versions differ, and why it works on i386 but not on amd64, may point to
 the solution.

### i386

### Both
* vnode reclaiming deadlocks

 This can be fixed by deferring the call to zfs_zinactive in zfs_reclaim to another system thread if the lock
 is held. But that causes a deadlock, and we need to investigate whether it is caused by this change or by
 another problem.

 The deadlock backtrace is this:

 VOP_WRITE->zfs_netbsd_write->zfs_write->dmu_tx_wait->txg_wait_open->cv_wait
 txg_quiesce_thread->txg_thread_wait->cv_wait
 txg_sync_thread->spa_sync->dsl_pool_sync->zio_wait->cv_wait

 The FreeBSD approach should be investigated: why do they do this differently, and why does it work for them?
 They call zfs_zinactive from zfs_freebsd_inactive, which is a null op for NetBSD.

 The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.

* vnode fsync bug

  I think that we are hitting this bug, too. I have some patches in the tree which fix the deadlock in vnode
  reclaim, but after that I get another deadlock in VOP_FSYNC.

## Functional bugs
* Snapshots
* Permissions
* Old code; we should update the NetBSD ZFS port to newer code
* More tasks can be found at <http://nxr.netbsd.org/xref/src/external/cddl/osnet/TODO>
