
# 1. Status of the NetBSD ZFS port

The NetBSD ZFS port is a work in progress and can easily panic your system.

**ZFS currently works only on i386!**

---
# 2. Using the NetBSD ZFS port

## Installation

Use any -current build for the i386 or amd64 architecture. All tools and modules should now be built by default. Two kernel modules are used for ZFS: solaris.kmod and zfs.kmod. The solaris module provides Solaris-like interfaces to ZFS on NetBSD. The second module, zfs, provides the ZFS file system itself.

## Configuration

Load the two kernel modules:

    modload solaris
    modload zfs
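
You can verify that both modules are loaded with modstat(8); a minimal sketch:

    modstat | grep -e solaris -e zfs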

After loading the modules, you can create a zpool (the ZFS version of a volume manager) and manage ZFS file systems on it.

    zpool create {zpool name} {type} {device}

where type is one of:

* mirror
* raidz
* raidz2
* omitted, which is the default and gives normal linear allocation

and device is a block device on NetBSD, for example /dev/sd0a.

    zpool create tank mirror /dev/sd0a /dev/sd1a

creates a mirrored zpool across two disk partitions.
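
The result can be checked with the standard zpool(8) subcommands; a quick sketch using the pool name from the example above:

    zpool status tank    # show the pool layout and device state
    zpool list           # list pools with size and usage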

With the zpool created, we can create ZFS file systems or zvols (ZFS logical volume disks).

    zfs create -V {size} tank/{zvol name}

creates a zvol named {zvol name} on the zpool called tank.

The logical disk device nodes are created at

    /dev/zvol/rdsk/{zpool name}/{zvol name}

    /dev/zvol/dsk/{zpool name}/{zvol name}
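
For example, a small sketch (the 1G size and the name vol0 are illustrative only; using the zvol as a disk with newfs(8) has not been verified on this port):

    zfs create -V 1G tank/vol0
    ls -l /dev/zvol/rdsk/tank/vol0 /dev/zvol/dsk/tank/vol0
    # the device nodes can then be treated like a disk, e.g.
    # newfs /dev/zvol/rdsk/tank/vol0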

Finally,

    zfs create tank/{zfs name}

creates a ZFS file system on the zpool called tank.
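
For instance (the file system name home is purely illustrative):

    zfs create tank/home
    zfs list    # shows tank and tank/home with space used and mountpoints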

## Administration

After zvols and file systems have been created, the pool configuration is saved in the /etc/zfs/zpool.cache file and is loaded again on the next zfs module load.
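
A quick way to check this persistence (a sketch only; it assumes the modules can be unloaded cleanly, which may not work while a pool is in use):

    modunload zfs
    modunload solaris
    modload solaris
    modload zfs
    zpool list    # pools recorded in /etc/zfs/zpool.cache should reappear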

---
# 3. Known Bugs

## Show stoppers

### amd64
* amd64 zio_root crash

  Investigation:
  vdev_label_read_config -> zio_root calls zio_wait on a zio_root zio_t. Later, zio_execute tries to generate a checksum on zio->zio_data, which is NULL for a zio_root. Is this a resurrection of andys zio_null problem, since zio_root is basically zio_null?

  Solution:
  Working out what differs between the i386 and amd64 versions, and why it works on i386 but not on amd64, may point to the fix.
### i386

### Both
* vnode reclaiming deadlocks

  This can be fixed by deferring the call to zfs_zinactive in zfs_reclaim to another system thread when the lock is held. But that causes a deadlock, and we need to investigate whether it is caused by this change or by another problem.

  The deadlock backtrace is:

        VOP_WRITE -> zfs_netbsd_write -> zfs_write -> dmu_tx_wait -> txg_wait_open -> cv_wait
        txg_quiesce_thread -> txg_thread_wait -> cv_wait
        txg_sync_thread -> spa_sync -> dsl_pool_sync -> zio_wait -> cv_wait

  The FreeBSD approach should be investigated: why do they do this differently, and why does it work for them? They call zfs_zinactive from zfs_freebsd_inactive, which is a null op on NetBSD.

  The zfs umount panic is caused by using the FreeBSD approach in zfs_reclaim.

* vnode fsync bug

  I think we are hitting this bug too. I have some patches in the tree which fix a deadlock in vnode reclaim, but after that I get another deadlock in VOP_FSYNC.

## Functional bugs
* Snapshots
* Permissions
* Old code; we should update the NetBSD ZFS port to newer code
* More tasks can be found at <http://nxr.netbsd.org/xref/src/external/cddl/osnet/TODO>

# 4. External Documentation links
A nice ZFS howto, written for ZFS + Mac OS X: <http://code.google.com/p/maczfs/wiki/GettingStarted>
