# 1. Status of NetBSD zfs port
     
The NetBSD ZFS port is a work in progress and can easily panic your system.
   
**ZFS currently works only on i386 and amd64!**
   
---
# 2. Using NetBSD ZFS port
   
   ## Installation 
   
Use any -current build for the i386 or amd64 architecture. All tools and modules should now be built by default. There are two kernel modules used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD; the zfs module provides the ZFS file system functions themselves.
   
## Configuration
   
The user needs to load both kernel modules:
   
       modload solaris
       modload zfs
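
Whether the modules loaded successfully can be checked with modstat(8); a minimal example, matching the module names used above:

    modstat | grep -e solaris -e zfs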
   
After loading the modules, the user can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it.
   
       zpool create {zpool name} {type} {device} 
   
   where type is:
   
   * mirror
   * raidz
   * raidz2
* omitted (the default, normal linear allocation)
   
{device} is a block device on NetBSD, for example /dev/sd0a.
   
    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool between two disk partitions.
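
Once a pool exists, it can be inspected with the standard zpool subcommands; a brief example, assuming the mirrored pool named tank created above:

    zpool status tank      # show pool health and the mirror layout
    zpool list             # show capacity and usage for all pools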
   
With the zpool created, we can create ZFS file systems or zvols (ZFS logical volume disks).
   
    zfs create -V {size} tank/{zvol name}

creates a zvol named {zvol name} of size {size} in the zpool called tank.
   
The logical disk's device nodes are created at
   
       /dev/zvol/rdsk/{zpool name}/{zvol name} 
   
       /dev/zvol/dsk/{zpool name}/{zvol name}
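
As an illustration of how a zvol can then be used as an ordinary disk, here is a sketch assuming a hypothetical 2 GB zvol named vol0 in the pool tank (the volume name and size are examples only):

    zfs create -V 2g tank/vol0                  # create the zvol
    newfs /dev/zvol/rdsk/tank/vol0              # put an FFS file system on the raw device
    mount /dev/zvol/dsk/tank/vol0 /mnt          # mount it via the block device node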
   
    zfs create tank/{zfs name}

creates a ZFS file system on the zpool called tank.
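
File systems on the pool are managed with the same zfs command; a short sketch, assuming a hypothetical dataset named tank/home:

    zfs create tank/home                        # create the file system
    zfs set mountpoint=/export/home tank/home   # choose where it gets mounted
    zfs list                                    # show datasets and their mount points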
   
## Administration
   
After zvols and file systems are created, they are recorded in the /etc/zfs/zpool.cache file and loaded again on the next zfs module load.
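
Pools can also be detached and re-attached explicitly with the standard export/import subcommands; a minimal sketch, reusing the pool tank from the examples above:

    zpool export tank      # cleanly detach the pool from this system
    zpool import           # list pools available for import
    zpool import tank      # attach the pool again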
   
---
# 3. Known Bugs
   
## Show stoppers
   
### amd64
* amd64 zio_root crash

  Investigation: vdev_label_read_config -> zio_root calls zio_wait on a zio_root zio_t. Later, zio_execute tries to generate a checksum on zio->zio_data, which is NULL for a zio_root. Is this a resurrection of andy's zio_null problem, since zio_root is basically zio_null?

  Solution: find out what differs between the i386 and amd64 versions; understanding why it works on i386 but not on amd64 may point to the fix.
### i386
   
### Both
* vnode reclaiming deadlocks
   
  The current code is quite a hack, but it works until I find some time to rework vnode reclaiming on NetBSD.
   
   http://nxr.netbsd.org/xref/src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c#4254
     
  This can be fixed by deferring the call to zfs_zinactive in zfs_reclaim to another system thread when the lock is held. But that causes a deadlock, and we need to investigate whether it is caused by this change or by another problem. The deadlock backtrace is:

      VOP_WRITE->zfs_netbsd_write->zfs_write->dmu_tx_wait->txg_wait_open->cv_wait
      txg_quiesce_thread->txg_thread_wait->cv_wait
      txg_sync_thread->spa_sync->dsl_pool_sync->zio_wait->cv_wait

  The FreeBSD approach should be investigated: why do they do this differently, and why does it work for them? They call zfs_zinactive from zfs_freebsd_inactive, which is a null op on NetBSD.

  The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.

* vnode fsync bug

  I think that we are hitting this bug, too. I have some patches in the tree which fix the deadlock in vnode reclaim, but after that I'm getting another deadlock in VOP_FSYNC.
   
* kmem_cache_alloc (aka pool_cache_get) panics due to the KM_NOSLEEP flag

  There are some problems in the NetBSD UVM subsystem where a KM_NOSLEEP allocation can fail even if the system has enough free memory. I talked with ad@ about it and he said that there is one case in UVM where, while a certain lock is held, a KM_NOSLEEP allocation will fail.
   
## Functional bugs
* Snapshots
* Permissions
* Old code; we should update the NetBSD ZFS port to the newer code
   * More tasks can be found at <http://nxr.netbsd.org/xref/src/external/cddl/osnet/TODO>
   
# External Documentation links

A nice ZFS howto written for ZFS + Mac OS X: <http://code.google.com/p/maczfs/wiki/GettingStarted>
