# ZFS on NetBSD

This page attempts to do two things: provide enough orientation and
pointers to standard ZFS documentation for NetBSD users who are new to
ZFS, and describe NetBSD-specific ZFS information. It is
emphatically not a tutorial or an introduction to ZFS.

Many things are marked with \todo because they need a better
explanation, and some have question marks.

# Documentation Pointers

See the man pages for zfs(8) and zpool(8).

- [OpenZFS Documentation](https://openzfs.github.io/openzfs-docs/)
- [OpenZFS admin docs index page](https://github.com/openzfs/zfs/wiki/Admin-Documentation)
- [FreeBSD Handbook ZFS Chapter](https://www.freebsd.org/doc/handbook/zfs.html)
- [Oracle ZFS Administration Manual](https://docs.oracle.com/cd/E26505_01/html/E37384/index.html)
- [Wikipedia](https://en.wikipedia.org/wiki/ZFS)

# Status of ZFS in NetBSD

## NetBSD 8

NetBSD 8 has an old version of ZFS, and it is not recommended for use
at all. There is no evidence that anyone is interested in helping
with ZFS on 8. Those wishing to use ZFS on NetBSD 8 should therefore
update to NetBSD 9.

## NetBSD 9

NetBSD 9 has a version of ZFS that is considered to work well. There
have been fixes since 9.0_RELEASE. As always, people running NetBSD 9
are likely best served by the most recent version of the netbsd-9
stable branch. As of 2021-02, ZFS in the NetBSD 9.1 release is very
close to netbsd-9.

### Native blocksize

ZFS attempts to find out the native blocksize for a disk when using it
in a pool; this is almost always 512 or 4096. Somewhere between 9.0
and 9.1, at least some disks on some controllers that used to report
512 now report 4096. This provokes a blocksize mismatch warning.

Given that the native blocksize of the disk didn't change, and things
seemed OK using the 512-byte emulated blocks, the warning is likely
not critical. However, rebuilding the pool with the 4096 blocksize is
likely to result in better behavior, because ZFS will only try to do
4096-byte writes. \todo Verify this and find the actual change and
explain better.

## NetBSD-current

NetBSD-current (as of 2021-02) has similar ZFS code to 9.

There is initial support for [[ZFS root|wiki/RootOnZFS]], via booting from
ffs and pivoting.

## NetBSD/xen special issues

In NetBSD-9, MAXPHYS is 64KB in most places, but because of xbd(4) it
is set to 32KB for Xen kernels. Thus the standard zfs kernel modules
do not work under Xen. In NetBSD-current, xbd(4) supports 64KB
MAXPHYS and this is no longer an issue.

Xen and zfs on current are reported to work well together, as of 2021-02.

## Architectures

Most people seem to be using amd64.

To build zfs, one puts MKZFS=yes in mk.conf. This is the default on
amd64 and aarch64 on netbsd-9. In current, it is also the default on
sparc64.

More or less, zfs can be enabled on an architecture when it is known
to build and run reliably. (Of course, users are welcome to build it
and report.)
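
For example, to enable ZFS in a build on an architecture where it is
not the default, one would add to mk.conf:

    MKZFS=yes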

# Quick Start

See the [FreeBSD Quickstart
Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html); only
the first item is NetBSD-specific.

- Put zfs=YES in rc.conf.

- Create a pool with "zpool create pool1 /dev/dk0".

- Run df and see /pool1.

- Create a filesystem mounted on /n0 with "zfs create -o
  mountpoint=/n0 pool1/n0".

- Go back, read the documentation, and start over.
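
The steps above can be sketched as shell commands (run as root;
/dev/dk0 is an example wedge name and will differ per system):

```sh
# Enable ZFS at boot and start it now.
echo 'zfs=YES' >> /etc/rc.conf
/etc/rc.d/zfs start

# Create a pool named pool1 on an example wedge.
zpool create pool1 /dev/dk0

# The pool's root dataset is mounted automatically.
df /pool1

# Create a filesystem within the pool, mounted at /n0.
zfs create -o mountpoint=/n0 pool1/n0
```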

# NetBSD-specific information

## rc.conf

The main configuration is to put zfs=YES in rc.conf, so that the rc.d
scripts bring up ZFS and mount ZFS file systems.

## pool locations

One can add disks or parts of disks into pools. Areas to be included
can be specified as:

- entire disks (e.g., /dev/wd0d on amd64, or /dev/wd0 which has the same major/minor)
- disklabel partitions (e.g., /dev/sd0e)
- wedges (e.g., /dev/dk0)

Information about created or imported pools is stored in
/etc/zfs/zpool.cache.
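
For example (pool and device names here are illustrative), each form
can be handed directly to zpool:

```sh
# A pool on a single wedge:
zpool create tank0 /dev/dk0

# A mirrored pool on two disklabel partitions:
zpool create tank1 mirror /dev/sd0e /dev/sd1e

# Inspect the resulting layout:
zpool status
```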

## pool importing problems

While one can "zpool create pool0 /dev/wd0f" and have a working pool,
this pool cannot be exported and imported straightforwardly. "zpool
export" works fine, and deletes zpool.cache.

"zpool import", however, only looks at entire disks (e.g. /dev/wd0),
and might look at wedges (e.g. /dev/dk0). It does not look at
partitions like /dev/wd0f, and there is no way on the command line to
ask that specific devices be examined. Thus, export/import fails for
pools with disklabel partitions.

One can make wd0 be a link to wd0f temporarily, and the pool will then
be importable. However, "wd0" is stored in zpool.cache, and on the
next boot the system will attempt to use it. This is obviously not a
good approach.

One can mkdir e.g. /etc/zfs/pool0 and in it have a symlink to
/dev/wd0f. Then, zpool import -d /etc/zfs/pool0 will scan
/etc/zfs/pool0/wd0f and succeed. The resulting zpool.cache will have
that path, but having symlinks in /etc/zfs/POOLNAME seems acceptable.
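
A sketch of that workaround, using the pool0 and wd0f names from
above:

```sh
# Create a directory holding a symlink to the partition device...
mkdir -p /etc/zfs/pool0
ln -s /dev/wd0f /etc/zfs/pool0/wd0f

# ...and tell zpool import to scan only that directory.
zpool import -d /etc/zfs/pool0 pool0
```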

\todo Determine a good fix, perhaps man page changes only, fix it
upstream, in current, and in 9, before removing this discussion.

## mount order

NetBSD 9 mounts other file systems and then ZFS file systems. This can
be a problem if /usr/pkgsrc is on ZFS and /usr/pkgsrc/distfiles is on
NFS. A workaround is to mark such mounts noauto and do them in
/etc/rc.local.
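
A sketch of that workaround, using the example paths above (the NFS
server name is hypothetical):

```sh
# In /etc/fstab, mark the NFS mount noauto so it is skipped at boot:
#   server:/export/distfiles /usr/pkgsrc/distfiles nfs rw,noauto
# Then mount it from /etc/rc.local, after ZFS filesystems are up:
mount /usr/pkgsrc/distfiles
```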

NetBSD current after 20200301 mounts ZFS first. The same issues and
workarounds apply in different circumstances.

## NFS

zfs filesystems can be exported via NFS, simply by placing them in
/etc/exports like any other filesystem.
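
For example, assuming a dataset mounted at /n0 (as in the Quick
Start) and a hypothetical client network, a line in /etc/exports
might look like:

    /n0 -maproot=0 -network 192.168.1.0/24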

The "zfs share" command adds a line to /etc/zfs/exports for each
filesystem whose sharenfs property is set, and "zfs unshare" removes
it. This file is ignored on NetBSD-9 and on current before 20210216;
on current after 20210216 those filesystems should be exported
(assuming NFS is enabled). It does not appear to be possible to set
options like maproot and network restrictions via this method.

On current before 20210216, a remote mkdir on a filesystem mounted
with -maproot=0:10 caused a kernel NULL pointer dereference; this has
since been fixed.

## zvol

Within a ZFS pool, the standard approach is to have file systems, but
one can also create a zvol, which is a block device of a certain size.
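
For example (pool, volume name, and size are illustrative):

```sh
# Create a 10 GB zvol named vol0 inside pool1.
zfs create -V 10g pool1/vol0

# List the volumes in all pools.
zfs list -t volume
```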

\todo The zvol will appear as /dev/???? and can be used in many
respects like a slice. However, the system will not read disklabels
and gpt labels from a zvol; in this respect it is more like a disklabel
partition or wedge than a disk drive.

\todo Explain that one can export a zvol via iscsi.

\todo Explain if one can swap on a zvol.

\todo Explain that one can use ccd to create a normal-looking disk
from a zvol. This allows reading a GPT label from the zvol, which is
useful in case the zvol had been exported via iscsi and some other
system created a label.

# Memory usage

Basically, ZFS uses lots of memory, and most people run it on systems
with large amounts of memory. NetBSD works well on systems with
comparatively small amounts of memory. So a natural question is how
well ZFS works on one's VAX with 2M of RAM :-) More seriously, one
might ask if it is reasonable to run ZFS on a RPI3 with 1G of RAM, or
if it is reasonable on a system with 4G.

The prevailing wisdom is more or less that ZFS consumes 1G plus 1G per
1T of disk. 32-bit architectures are viewed as too small to run ZFS.

Besides RAM, zfs requires a kernel stack of at least 12KB; some
operations cause stack overflow with an 8KB kernel stack. On NetBSD,
the architectures with a 16KB kernel stack are amd64, sparc64,
powerpc, and the experimental ia64 and hppa. mac68k and sh3 have a
12KB kernel stack. All others use only an 8KB stack, which is not
enough to run zfs.

NetBSD has many statistics provided via sysctl; see "sysctl
kstat.zfs".

FreeBSD has tunables that NetBSD does not seem to have, described in
the [FreeBSD Handbook ZFS Advanced
section](https://docs.freebsd.org/en/books/handbook/zfs/#zfs-advanced).

# Interoperability with other systems

Modern ZFS uses pool version 5000 and feature flags.

It is in general possible to export a pool and then import the pool on
some other system, as long as the other system supports all the used
features.

\todo Explain how to do this and what is known to work.

\todo Explain feature flags relationship to FreeBSD, Linux, illumos,
macOS.

# Sources of ZFS code

Currently, there are multiple ZFS projects and codebases:

- [OpenZFS](http://www.open-zfs.org/wiki/Main_Page)
- [openzfs repository](https://github.com/openzfs/zfs)
- [zfsonlinux](https://zfsonlinux.org/)
- [OpenZFS on OS X](https://openzfsonosx.org/) [repo](https://github.com/openzfsonosx)
- proprietary ZFS in Solaris (not relevant in open source)
- ZFS as released under the CDDL (common ancestor, now of historical interest)

OpenZFS is a coordinating project to align open ZFS codebases. There
is a notion of a shared core codebase and OS-specific adaptation code.

- [zfsonlinux relationship to OpenZFS](https://github.com/openzfs/zfs/wiki/OpenZFS-Patches)
- FreeBSD more or less imports code from openzfs and pushes back fixes. \todo Verify this.
- NetBSD has imported code from FreeBSD.
- The status of ZFS on macOS is unclear (2021-02).