# ZFS on NetBSD

This page attempts to do two things: to provide enough orientation and
pointers to standard ZFS documentation for NetBSD users who are new to
ZFS, and to describe NetBSD-specific ZFS information. It is
emphatically not a tutorial or an introduction to ZFS.

Many things are marked with \todo because they need a better
explanation, and some have question marks.

# Status of ZFS in NetBSD

## NetBSD 8

NetBSD 8 has an old version of ZFS, and it is not recommended for use
at all. There is no evidence that anyone is interested in helping
with ZFS on 8. Those wishing to use ZFS on NetBSD 8 should therefore
update to NetBSD 9.

## NetBSD 9

NetBSD-9 has ZFS that is considered to work well. There have been
fixes since 9.0_RELEASE. As always, people running NetBSD 9 are
likely best served by the most recent version of the netbsd-9 stable
branch. As of 2021-02, ZFS in the NetBSD 9.1 release is very close to
netbsd-9.

There is a crash with mkdir over NFS with maproot. See http://gnats.netbsd.org/55042

There is a credible rumor, but apparently no PR, that rm spuriously
commits the ZIL, which hurts performance.

There has been a report of an occasional panic somewhere in
zfs_putpages.

## NetBSD-current

NetBSD-current (as of 2021-02) has similar ZFS code to 9.

There is initial support for [[ZFS root|wiki/RootOnZFS]], via booting from
ffs and pivoting.

## NetBSD/xen special issues

Summary: if you are using NetBSD, xen and zfs, use NetBSD-current.

In NetBSD-9, MAXPHYS is 64KB in most places, but because of xbd(4) it
is set to 32KB for XEN kernels. Thus the standard zfs kernel modules
do not work under xen. In NetBSD-current, xbd(4) supports 64KB
MAXPHYS and this is no longer an issue. Xen and zfs on current are
reported to work well together, as of 2021-02.

## Architectures

Most people seem to be using amd64.

To build zfs, one puts MKZFS=yes in mk.conf. This is default on amd64
and aarch64 on netbsd-9. In current, it is also default on sparc64.

More or less, zfs can be enabled on an architecture when it is known
to build and run reliably. (Of course, users are welcome to build it
and report.)

# Quick Start

See the [FreeBSD Quickstart
Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html); only
the first item is NetBSD specific.

- Put zfs=YES in rc.conf.

- Create a pool as "zpool create pool1 /dev/dk0".

- df and see /pool1

- Create a filesystem mounted on /n0 as "zfs create -o
  mountpoint=/n0 pool1/n0".

- Read the documentation referenced in the next section.
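
Putting the steps above together, a minimal session might look like
the following sketch (pool1, dk0 and /n0 are just the example names
used above; run the commands as root):

    echo "zfs=YES" >> /etc/rc.conf
    /etc/rc.d/zfs start
    zpool create pool1 /dev/dk0
    df                      # /pool1 should now appear
    zfs create -o mountpoint=/n0 pool1/n0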

## Documentation Pointers

See the man pages for zfs(8), zpool(8). Also see zdb(8), if only for
seeing pool config info when run with no arguments.

- [OpenZFS Documentation](https://openzfs.github.io/openzfs-docs/)
- [OpenZFS admin docs index page](https://github.com/openzfs/zfs/wiki/Admin-Documentation)
- [FreeBSD Handbook ZFS Chapter](https://www.freebsd.org/doc/handbook/zfs.html)
- [Oracle ZFS Administration Manual](https://docs.oracle.com/cd/E26505_01/html/E37384/index.html)
- [Wikipedia](https://en.wikipedia.org/wiki/ZFS)

# NetBSD-specific information

## rc.conf

The main configuration is to put zfs=YES in rc.conf, so that the rc.d
scripts bring up ZFS and mount ZFS file systems.

## pool locations

One can add disks or parts of disks into pools. Areas to be included
can be specified as:

- entire disks (e.g., /dev/wd0d on amd64, or /dev/wd0 which has the same major/minor)
- disklabel partitions (e.g., /dev/sd0e)
- wedges (e.g., /dev/dk0)
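
For illustration, any of these can be handed to "zpool create"; the
device names below are examples and will differ per machine (note the
import caveat for disklabel partitions described later on this page):

    zpool create pool1 /dev/dk0                    # a wedge
    zpool create pool2 /dev/sd0e                   # a disklabel partition
    zpool create pool3 mirror /dev/wd0d /dev/wd1d  # two whole disks, mirrored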

Information about created or imported pools is stored in
/etc/zfs/zpool.cache.

Conventional wisdom is that a pool that is more than 80% used gets
unhappy; so far there is no NetBSD-specific wisdom to confirm or
refute that.

## pool native blocksize mismatch

ZFS attempts to find out the native blocksize for a disk when using it
in a pool; this is almost always 512 or 4096. Somewhere between 9.0
and 9.1, at least some disks on some controllers that used to report
512 now report 4096. This provokes a blocksize mismatch warning.

Given that the native blocksize of the disk didn't change, and things
seemed OK using the 512 emulated blocks, the warning is likely not
critical. However, rebuilding the pool with the 4096 blocksize is
likely to result in better behavior, because ZFS will then only do
4096-byte writes. \todo Verify this and find the actual change and
explain better.
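
When recreating such a pool, the 4096-byte sector size can be
requested explicitly via the ashift property, as on other OpenZFS
platforms; whether the NetBSD port accepts this option is an
assumption that needs verification:

    # ashift=12 means 2^12 = 4096-byte sectors (assumes -o ashift is supported)
    zpool create -o ashift=12 pool1 /dev/dk0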

## pool importing problems

While one can "zpool create pool0 /dev/wd0f" and have a working pool,
this pool cannot be exported and imported straightforwardly. "zpool
export" works fine, and deletes zpool.cache. "zpool import", however,
only looks at entire disks (e.g. /dev/wd0), and might look at slices
(e.g. /dev/dk0). It does not look at partitions like /dev/wd0f, and
there is no way on the command line to ask that specific devices be
examined. Thus, export/import fails for pools with disklabel
partitions.

One can make wd0 be a link to wd0f temporarily, and the pool will then
be importable. However, "wd0" is stored in zpool.cache and on the
next boot that will attempt to be used. This is obviously not a good
approach.

One can mkdir e.g. /etc/zfs/pool0 and in it have a symlink to
/dev/wd0f. Then, zpool import -d /etc/zfs/pool0 will scan
/etc/zfs/pool0/wd0f and succeed. The resulting zpool.cache will have
that path, but having symlinks in /etc/zfs/POOLNAME seems acceptable.
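
Concretely, that workaround looks like the following sketch (pool0 and
wd0f are the example names from this section):

    zpool export pool0
    mkdir /etc/zfs/pool0
    ln -s /dev/wd0f /etc/zfs/pool0/wd0f
    zpool import -d /etc/zfs/pool0 pool0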

\todo Determine a good fix, perhaps man page changes only, fix it
upstream, in current, and in 9, before removing this discussion.

## mountpoint conventions

By default, datasets are mounted as /poolname/datasetname. One can
also set a mountpoint; see zfs(8).
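
For example, the mountpoint of an existing dataset can be changed via
the mountpoint property (pool1/n0 and /n0 as in the Quick Start):

    zfs set mountpoint=/n0 pool1/n0
    zfs get mountpoint pool1/n0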

There does not appear to be any reason to choose explicit mountpoints
vs the default (and either using data in place or symlinking to it).

## mount order

NetBSD 9 mounts other file systems and then ZFS file systems. This can
be a problem if /usr/pkgsrc is on ZFS and /usr/pkgsrc/distfiles is on
NFS. A workaround is to use noauto and do the mounts in
/etc/rc.local.
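
A sketch of that workaround, assuming the NFS filesystem is the one
that has to wait (server:/export/distfiles is a placeholder):

    # /etc/fstab: defer the NFS mount with noauto
    server:/export/distfiles /usr/pkgsrc/distfiles nfs rw,noauto 0 0

    # /etc/rc.local: mount it once everything else, including ZFS, is up
    mount /usr/pkgsrc/distfiles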

NetBSD current after 20200301 mounts ZFS first. The same issues and
workarounds apply in different circumstances.

## NFS

zfs filesystems can be exported via NFS, simply by placing them in
/etc/exports like any other filesystem.
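
As an example, an /etc/exports line for the filesystem from the Quick
Start might look like this (the network and options are placeholders):

    /n0 -maproot=nobody -network 192.168.1.0 -mask 255.255.255.0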

The "zfs share" command adds a line to /etc/zfs/exports for each
filesystem with the sharenfs property set, and "zfs unshare" removes
it. This file is ignored on NetBSD-9 and current before 20210216; on
current after 20210216 those filesystems should be exported (assuming
NFS is enabled). It does not appear to be possible to set options
like maproot and network restrictions via this method.

On current before 20210216, a remote mkdir on a filesystem exported
with -maproot=0:10 caused a kernel NULL pointer dereference. This is
now fixed.

## zvol

Within a ZFS pool, the standard approach is to have file systems, but
one can also create a zvol, which is a block device of a certain size.

As an example, "zfs create -V 16G tank0/xen-netbsd-9-amd64" creates a
zvol (intended to be a virtual disk for a domU).

The zvol in the example will appear as
/dev/zvol/rdsk/tank0/xen-netbsd-9-amd64 and
/dev/zvol/dsk/tank0/xen-netbsd-9-amd64 and can be used like a
disklabel partition or wedge. However, the system will not read
disklabels and GPT labels from a zvol.

Doing "swapctl -a" on a zvol device node fails. \todo Is it really
true that NetBSD can't swap on a zvol? (When using a zvol for swap,
standard advice is to avoid the "-s" option, which would skip
reserving the allocated space. Standard advice is also to consider
using a dedicated pool.)

\todo Explain that one can export a zvol via iscsi.

One can use ccd to create a normal-looking disk from a zvol. This
allows reading a GPT label from the zvol, which is useful in case the
zvol had been exported via iscsi and some other system created a
label.
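
A sketch of the ccd approach, using the zvol from the example above;
the interleave of 0 means plain concatenation of the single component,
but check ccdconfig(8) and dkctl(8) for the exact invocation:

    ccdconfig ccd0 0 none /dev/zvol/dsk/tank0/xen-netbsd-9-amd64
    gpt show ccd0           # the GPT on the zvol should now be visible
    dkctl ccd0 makewedges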

# Memory usage

Basically, ZFS uses lots of memory and most people run it on systems
with large amounts of memory. NetBSD works well on systems with
comparatively small amounts of memory. So a natural question is how
well ZFS works on one's VAX with 2M of RAM :-). More seriously, one
might ask if it is reasonable to run ZFS on a RPI3 with 1G of RAM, or
if it is reasonable on a system with 4G.

The prevailing wisdom is more or less that ZFS consumes 1G plus 1G per
1T of disk. 32-bit architectures are viewed as too small to run ZFS.

Besides RAM, zfs requires a kernel stack size of at least 12KB; some
operations cause stack overflow with an 8KB kernel stack. On NetBSD,
the architectures with a 16KB kernel stack are amd64, sparc64,
powerpc, and the experimental ia64 and hppa. mac68k and sh3 have a
12KB kernel stack. All others use only an 8KB stack, which is not
enough to run zfs.

NetBSD has many statistics provided via sysctl; see "sysctl
kstat.zfs".
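
For example, the size of the ARC can be checked as below; the exact
node names are an assumption, so list the whole kstat.zfs tree to see
what is available:

    sysctl kstat.zfs | more
    sysctl kstat.zfs.misc.arcstats.size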

FreeBSD has tunables that NetBSD does not seem to have, described in
[FreeBSD Handbook ZFS Advanced
section](https://docs.freebsd.org/en/books/handbook/zfs/#zfs-advanced).

# Interoperability with other systems

Modern ZFS uses pool version 5000 and feature flags.

It is in general possible to export a pool and then import the pool on
some other system, as long as the other system supports all the used
features.
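
The basic procedure is a standard export on the source system followed
by an import on the destination; checking which features are enabled
beforehand helps predict whether the other system will accept the pool
(tank is an example pool name):

    # on the source system
    zpool get all tank | grep feature@
    zpool export tank

    # on the destination system
    zpool import            # with no arguments, lists pools found on attached disks
    zpool import tank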

\todo Explain how to do this and what is known to work.

\todo Explain feature flags relationship to FreeBSD, Linux, Illumos,
macOS.

# Sources of ZFS code

Currently, there are multiple ZFS projects and codebases:

- [OpenZFS](http://www.open-zfs.org/wiki/Main_Page)
- [openzfs repository](https://github.com/openzfs/zfs)
- [zfsonlinux](https://zfsonlinux.org/)
- [OpenZFS on OS X](https://openzfsonosx.org/) [repo](https://github.com/openzfsonosx)
- proprietary ZFS in Solaris (not relevant in open source)
- ZFS as released under the CDDL (common ancestor, now of historical interest)

OpenZFS is a coordinating project to align open ZFS codebases. There
is a notion of a shared core codebase and OS-specific adaptation code.

- [zfsonlinux relationship to OpenZFS](https://github.com/openzfs/zfs/wiki/OpenZFS-Patches)
- FreeBSD more or less imports code from openzfs and pushes back fixes. \todo Verify this.
- NetBSD has imported code from FreeBSD.
- The status of ZFS on macOS is unclear (2021-02).