# ZFS on NetBSD

This page attempts to do two things: provide enough orientation and
pointers to standard ZFS documentation for NetBSD users who are new to
ZFS, and to describe NetBSD-specific ZFS information.  It is
emphatically not a tutorial or an introduction to ZFS.

Many things are marked with \todo because they need a better
explanation, and some have question marks.

# Status of ZFS in NetBSD

## NetBSD 8

NetBSD 8 has an old version of ZFS, and it is not recommended for use
at all.  There is no evidence that anyone is interested in helping
with ZFS on 8.  Those wishing to use ZFS on NetBSD 8 should therefore
update to NetBSD 9.

## NetBSD 9

NetBSD 9 has a version of ZFS that is considered to work well.  There
have been fixes since 9.0_RELEASE.  As always, people running NetBSD 9
are likely best served by the most recent version of the netbsd-9
stable branch.  As of 2021-02, ZFS in the NetBSD 9.1 release is very
close to netbsd-9.

## NetBSD-current

NetBSD-current (as of 2021-02) has ZFS code similar to NetBSD 9's.

There is initial support for [[ZFS root|wiki/RootOnZFS]], via booting from
ffs and pivoting.

## NetBSD/xen special issues

Summary: if you are using NetBSD, Xen, and ZFS, use NetBSD-current.

In NetBSD-9, MAXPHYS is 64KB in most places, but because of xbd(4) it
is set to 32KB for XEN kernels.  Thus the standard ZFS kernel modules
do not work under Xen.  In NetBSD-current, xbd(4) supports 64 KB
MAXPHYS and this is no longer an issue.  Xen and ZFS on current are
reported to work well together, as of 2021-02.

## Architectures

Most people seem to be using amd64.

To build ZFS, one puts MKZFS=yes in mk.conf.  This is the default on
amd64 and aarch64 on netbsd-9.  In current, it is also the default on
sparc64.

More or less, ZFS can be enabled on an architecture when it is known
to build and run reliably.  (Of course, users are welcome to build it
and report.)

# Quick Start

See the [FreeBSD Quickstart
Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html); only
the first item below is NetBSD-specific.

  - Put zfs=YES in rc.conf.

  - Create a pool as "zpool create pool1 /dev/dk0".

  - Run df and see /pool1.

  - Create a filesystem mounted on /n0 as "zfs create -o
    mountpoint=/n0 pool1/n0".

  - Read the documentation referenced in the next section.
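
Putting these together, a minimal session might look like the
following (a sketch: /dev/dk0 and the pool and filesystem names are
examples, and the rc.d script name is inferred from the zfs=YES
setting described below):

    echo 'zfs=YES' >> /etc/rc.conf
    /etc/rc.d/zfs start                     # also runs at boot
    zpool create pool1 /dev/dk0             # pool on one wedge
    df -h /pool1                            # pool is mounted at /pool1
    zfs create -o mountpoint=/n0 pool1/n0   # filesystem mounted at /n0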

## Documentation Pointers

See the man pages for zfs(8) and zpool(8).  Also see zdb(8), if only
because running it with no arguments shows pool configuration info.

  - [OpenZFS Documentation](https://openzfs.github.io/openzfs-docs/)
  - [OpenZFS admin docs index page](https://github.com/openzfs/zfs/wiki/Admin-Documentation)
  - [FreeBSD Handbook ZFS Chapter](https://www.freebsd.org/doc/handbook/zfs.html)
  - [Oracle ZFS Administration Manual](https://docs.oracle.com/cd/E26505_01/html/E37384/index.html)
  - [Wikipedia](https://en.wikipedia.org/wiki/ZFS)

# NetBSD-specific information

## rc.conf

The main configuration is to put zfs=YES in rc.conf, so that the rc.d
scripts bring up ZFS and mount ZFS file systems.
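
For example (a sketch; the manual start is only needed to avoid
rebooting after editing rc.conf):

    # in /etc/rc.conf:
    zfs=YES

    # then, to bring ZFS up immediately:
    /etc/rc.d/zfs start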

## pool locations

One can add disks or parts of disks into pools.  Areas to be included
can be specified as:

  - entire disks (e.g., /dev/wd0d on amd64, or /dev/wd0 which has the same major/minor)
  - disklabel partitions (e.g., /dev/sd0e)
  - wedges (e.g., /dev/dk0)
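
For example, each of the following creates a single-vdev pool (a
sketch; the device and pool names are examples):

    zpool create tank0 /dev/wd0    # entire disk
    zpool create tank1 /dev/sd0e   # disklabel partition
    zpool create tank2 /dev/dk0    # wedge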

Information about created or imported pools is stored in
/etc/zfs/zpool.cache.

Conventional wisdom is that a pool that is more than 80% full gets
unhappy; so far there is no NetBSD-specific experience to confirm or
refute that.
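
To see how full a pool is, use the standard zpool properties (stock
ZFS, nothing NetBSD-specific):

    zpool list -o name,size,allocated,free,capacity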

## pool native blocksize mismatch

ZFS attempts to find out the native blocksize for a disk when using it
in a pool; this is almost always 512 or 4096.  Somewhere between 9.0
and 9.1, at least some disks on some controllers that used to report
512 now report 4096.  This provokes a blocksize mismatch warning.

Given that the native blocksize of the disk didn't change, and things
seemed OK using the 512 emulated blocks, the warning is likely not
critical.  However, rebuilding the pool with the 4096 blocksize is
likely to result in better behavior, because ZFS will only try to do
4096-byte writes.  \todo Verify this and find the actual change and
explain better.
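
The blocksize a pool uses for a disk is recorded as the vdev's
"ashift" (9 means 512-byte blocks, 12 means 4096-byte blocks), which
can be inspected with zdb (the pool name is an example):

    zdb -C pool1 | grep ashift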

## pool importing problems

While one can "zpool create pool0 /dev/wd0f" and have a working pool,
this pool cannot be exported and imported straightforwardly.  "zpool
export" works fine, and deletes zpool.cache.  "zpool import", however,
only looks at entire disks (e.g. /dev/wd0), and might look at wedges
(e.g. /dev/dk0).  It does not look at partitions like /dev/wd0f, and
there is no way on the command line to ask that specific devices be
examined.  Thus, export/import fails for pools with disklabel
partitions.

One can temporarily make wd0 a link to wd0f, and the pool will then be
importable.  However, "wd0" is stored in zpool.cache, and the system
will attempt to use it on the next boot.  This is obviously not a good
approach.

One can mkdir e.g. /etc/zfs/pool0 and in it have a symlink to
/dev/wd0f.  Then, zpool import -d /etc/zfs/pool0 will scan
/etc/zfs/pool0/wd0f and succeed.  The resulting zpool.cache will have
that path, but having symlinks in /etc/zfs/POOLNAME seems acceptable.
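
Concretely, the workaround looks like this (pool and device names are
examples):

    mkdir /etc/zfs/pool0
    ln -s /dev/wd0f /etc/zfs/pool0/wd0f
    zpool import -d /etc/zfs/pool0 pool0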

\todo Determine a good fix, perhaps man page changes only, fix it
upstream, in current, and in 9, before removing this discussion.

## mountpoint conventions

By default, datasets are mounted as /poolname/datasetname.  One can
also set a mountpoint; see zfs(8).

There does not appear to be any strong reason to prefer explicit
mountpoints over the default (and either using the data in place or
symlinking to it).
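
A mountpoint can be set at creation time or changed later (a sketch;
the names are examples):

    zfs create -o mountpoint=/n0 pool1/n0
    zfs set mountpoint=/data pool1/data
    zfs get mountpoint pool1/data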

## mount order

NetBSD 9 mounts non-ZFS file systems first and ZFS file systems
afterwards.  This can be a problem if /usr/pkgsrc is on ZFS and
/usr/pkgsrc/distfiles is on NFS.  A workaround is to use noauto and do
the mounts in /etc/rc.local.
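
For the pkgsrc example, the workaround might look like this (a
sketch; the server name and paths are examples):

    # in /etc/fstab: keep the NFS filesystem out of the normal pass
    server:/distfiles /usr/pkgsrc/distfiles nfs rw,noauto 0 0

    # in /etc/rc.local: mount it after ZFS filesystems are mounted
    mount /usr/pkgsrc/distfiles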

NetBSD-current after 20200301 mounts ZFS first.  The same issues and
workarounds apply, with the roles of the file systems reversed.

## NFS

ZFS filesystems can be exported via NFS, simply by placing them in
/etc/exports like any other filesystem.
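
For example, an /etc/exports line for the filesystem created in the
Quick Start might be (a sketch; the options and network are examples;
see exports(5)):

    /n0 -maproot=0:10 -network 192.168.1.0/24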

The "zfs share" command adds a line to /etc/zfs/exports for each
filesystem that has the sharenfs property set, and "zfs unshare"
removes it.  This file is ignored on NetBSD-9 and on current before
20210216; on current after 20210216 those filesystems should be
exported (assuming NFS is enabled).  It does not appear to be possible
to set options like maproot and network restrictions via this method.
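
The sharenfs route looks like this (a sketch; as noted above, it only
takes effect on sufficiently new current):

    zfs set sharenfs=on pool1/n0
    zfs share pool1/n0      # adds a line to /etc/zfs/exports
    zfs unshare pool1/n0    # removes it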

On current before 20210216, a remote mkdir on a filesystem exported
with -maproot=0:10 caused a kernel NULL pointer dereference.  This is
now fixed.

## zvol

Within a ZFS pool, the standard approach is to have file systems, but
one can also create a zvol, which is a block device of a certain size.

As an example, "zfs create -V 16G tank0/xen-netbsd-9-amd64" creates a
zvol (intended to be a virtual disk for a domU).

The zvol in the example will appear as
/dev/zvol/rdsk/tank0/xen-netbsd-9-amd64 and
/dev/zvol/dsk/tank0/xen-netbsd-9-amd64 and can be used like a
disklabel partition or wedge.  However, the system will not read
disklabels and GPT labels from a zvol.
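
For instance, one can put an ffs filesystem directly on a zvol (a
sketch; the names are examples):

    zfs create -V 16G tank0/vol0
    newfs /dev/zvol/rdsk/tank0/vol0
    mount /dev/zvol/dsk/tank0/vol0 /mnt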

Doing "swapctl -a" on a zvol device node fails.  \todo Is it really
true that NetBSD can't swap on a zvol?  (When using a zvol for swap,
standard advice is to avoid the "-s" option, which would skip
reserving the allocated space.  Standard advice is also to consider
using a dedicated pool.)

\todo Explain that one can export a zvol via iSCSI.

One can use ccd(4) to create a normal-looking disk from a zvol.  This
allows reading a GPT label from the zvol, which is useful in case the
zvol had been exported via iSCSI and some other system created a
label.
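
A minimal sketch (the zvol name is an example, and the interleave
value for a single-component ccd is an assumption; see ccdconfig(8)):

    ccdconfig ccd0 0 none /dev/zvol/dsk/tank0/vol0
    gpt show ccd0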

# Memory usage

Basically, ZFS uses lots of memory and most people run it on systems
with large amounts of memory.  NetBSD works well on systems with
comparatively small amounts of memory.  So a natural question is how
well ZFS works on one's VAX with 2M of RAM :-)  More seriously, one
might ask if it is reasonable to run ZFS on a RPI3 with 1G of RAM, or
if it is reasonable on a system with 4G.

The prevailing wisdom is more or less that ZFS consumes 1G of RAM plus
1G per 1T of disk.  32-bit architectures are viewed as too small to
run ZFS.

Besides RAM, ZFS requires that the architecture's kernel stack size be
at least 12KB; some operations cause a stack overflow with an 8KB
kernel stack.  On NetBSD, the architectures with a 16KB kernel stack
are amd64, sparc64, powerpc, and the experimental ia64 and hppa.
mac68k and sh3 have a 12KB kernel stack.  All others use only an 8KB
stack, which is not enough to run ZFS.

NetBSD has many statistics provided via sysctl; see "sysctl
kstat.zfs".
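
For example, to watch the size of the ARC (ZFS's cache) rather than
dumping the whole large tree (a sketch; exact node names may vary by
version):

    sysctl kstat.zfs.misc.arcstats.size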

FreeBSD has tunables that NetBSD does not seem to have, described in
[FreeBSD Handbook ZFS Advanced
section](https://docs.freebsd.org/en/books/handbook/zfs/#zfs-advanced).

# Interoperability with other systems

Modern ZFS uses pool version 5000 and feature flags.

It is in general possible to export a pool and then import the pool on
some other system, as long as the other system supports all the
features in use.
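
The basic procedure would be as follows (a sketch; the pool name is
an example, and cross-OS moves from NetBSD are not known to be well
tested):

    zpool export pool1
    # move the disks to the other system; then, on that system:
    zpool import pool1
    # to see which features that system's ZFS supports:
    zpool upgrade -v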

\todo Explain how to do this and what is known to work.

\todo Explain feature flags relationship to FreeBSD, Linux, Illumos,
macOS.

# Sources of ZFS code

Currently, there are multiple ZFS projects and codebases:

  - [OpenZFS](http://www.open-zfs.org/wiki/Main_Page)
  - [openzfs repository](https://github.com/openzfs/zfs)
  - [zfsonlinux](https://zfsonlinux.org/)
  - [OpenZFS on OS X](https://openzfsonosx.org/) [repo](https://github.com/openzfsonosx)
  - proprietary ZFS in Solaris (not relevant in open source)
  - ZFS as released under the CDDL (common ancestor, now of historical interest)

OpenZFS is a coordinating project to align open ZFS codebases.  There
is a notion of a shared core codebase and OS-specific adaptation code.

  - [zfsonlinux relationship to OpenZFS](https://github.com/openzfs/zfs/wiki/OpenZFS-Patches)
  - FreeBSD more or less imports code from openzfs and pushes back fixes. \todo Verify this.
  - NetBSD has imported code from FreeBSD.
  - The status of ZFS on macOS is unclear (2021-02).
