There are various techniques for upgrading packages, either by using pre-built binary package tarballs or by building new packages via the pkgsrc build system. This wiki page summarizes the different ways this can be done, with examples and pointers to further information.
Contents
Methods using only binary packages
pkgin
The recommended way to manage your system with binary packages is by using pkgtools/pkgin.
pkg_add pkgin
Then, in /usr/pkg/etc/pkgin/repositories.conf, configure the binary repository from which you want to install packages. Run 'pkgin update' to fetch the list of available packages. You can then install packages with, for example, 'pkgin install firefox'.
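repositories.conf is simply a list of repository URLs, one per line; a minimal sketch follows (the OS version and architecture in the URL are only illustrative, substitute your own):
# /usr/pkg/etc/pkgin/repositories.conf
http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/10.0/All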
To update all installed packages, just run
pkgin update
pkgin upgrade
pkg_add -uu
pkg_add's -u option is used to update a package. Basically: it saves the package's current list of packages that depend on it (+REQUIRED_BY), installs the new package, and restores that list of depending packages.
By giving the option twice (-uu), pkg_add will also attempt to update prerequisite packages.
See the manual page, pkg_add(1), for details.
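For example, a single package and its prerequisites can be updated from a binary repository roughly like this (the URL and the package name 'firefox' are only illustrative):
env PKG_PATH=http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/10.0/All pkg_add -uu firefox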
pkg_chk -b
Use "-b -P URL" where URL is where the binary packages are (e.g. ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/5.1/All/).
For example, to update any missing packages by using binary packages:
pkg_chk -b -P URL -u
Or to automatically add any missing packages using just binary packages:
pkg_chk -b -P URL -a -C pkg_chk.conf
If both -b and -P are given, no pkgsrc tree is used. If packages are on the local machine, they are scanned directly, otherwise the pkg_summary database is fetched. (Using pkg_summary for local packages is on the TODO list.)
(pkg_chk is also covered below.)
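The file given with -C is simply a list of package directories, optionally followed by tags that restrict which hosts want them; a minimal sketch, with purely illustrative package choices (see pkg_chk(8) for the exact tag syntax):
# pkg_chk.conf: one package directory per line, optional tags after it
misc/tmux
www/firefox
mail/mutt       laptop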
Methods that build packages from source
pkgsrc is developed and tested with a consistent source tree. While you can do partial updates, doing so crosses into undefined behavior. Thus, you should be up to date with respect to the repository, either on HEAD, a quarterly branch, or much less likely, a specific date on HEAD.
That said, it is sometimes necessary to check out a particular package at an older date to work around a broken update. This risks difficulty, but can be reasonable for those prepared to cope with it.
Among the options below, the mainstream/popular ones are pkg_rolling-replace, sandbox/separate-computer, and pbulk, listed in (roughly) increasing order of both setup difficulty and reliability.
make update
Note that there has been little recent discussion of experiences with make update; this suggests that few people are using it, since it is extremely unlikely that those who do are using it without problems.
'make update', invoked in a pkgsrc package directory, will remove the package and all packages that depend on it, keeping a list of such packages. It will then attempt to rebuild and install the package and all the packages that were removed.
It is possible, and in the case of updating a package with hundreds of dependencies arguably even likely, that the process will fail at some point. One can fix problems and resume the update by typing make update in the original directory, but the system can have unusable packages for a prolonged period of time. Thus, many people find 'make update' too dangerous, particularly for something like glib on a system using gnome.
To use binary packages with "make update" when they are available, set "UPDATE_TARGET=bin-install". If a package tarball is not available in ${PACKAGES} locally or at one of the URLs in BINPKG_SITES, the package will be built from source.
To enable manual rollback one can keep binary packages. One method is to always use 'make package', and to have "DEPENDS_TARGET=package" in /etc/mk.conf. Another is to use pkg_tarup to save packages before starting.
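A possible mk.conf fragment combining these settings might look like the following sketch (the PACKAGES path shown is the pkgsrc default and is included only for illustration):
# in /etc/mk.conf
# use an existing binary package during make update when one is available
UPDATE_TARGET=  bin-install
# always create a binary package when installing dependencies
DEPENDS_TARGET= package
# where binary packages are stored (this path is the pkgsrc default)
PACKAGES=       /usr/pkgsrc/packages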
make replace
'make replace' builds a new package and substitutes it for the installed one, without touching the packages that depend on it. This is good, because large numbers of packages are not removed in the hope they can later be rebuilt. It is bad, because depending packages may encounter ABI breaks (a shared library major version change, or something more subtle, such as changed behavior of a program invoked by another program). If there is an ABI change, the correct approach is to 'make replace' the depending packages as well. The careful reader will note that this process can in theory require replacing every package that (recursively) depends on a replaced package. See the pkg_rolling-replace section for a way to automate this process.
The "make replace" target preserves the existing +REQUIRED_BY file, uninstalls the currently installed package, installs the newly built package, reinstalls the +REQUIRED_BY file, and changes depending packages to reference the new package instead. It also marks such depending packages with the "unsafe_depends" build variable, set to YES, if the package version changes, and "unsafe_depends_strict" is set in all cases.
It can use the pkg_tarup tool to create a binary package of the currently installed package first, just in case there is a problem. (\todo Check this; it doesn't seem to happen in 2024.)
advice to be reconsidered
\todo Explain this better and perhaps prune; gdt hasn't found any reason to set this, and not any related to make replace.
If you are an expert (and don't plan to share your packages publicly), you can also use in your mk.conf:
USE_ABI_DEPENDS?=no
This ignores the ABI dependency recommendations and just uses the required DEPENDS.
Problems occurring during make replace
Besides ABI changes (for which pkg_rolling-replace is a good solution), make replace can fail if packages have been renamed or split. A particularly tricky case is when package foo is installed, but in pkgsrc it has been split into foo and foo-libs. In this case, make replace will try to build the new foo (while the old monolithic foo is installed). The new foo depends on foo-libs, so pkgsrc will try to build and install foo-libs, which will fail because foo-libs conflicts with the old foo. There are multiple approaches:
- Use pkg_delete -f, and then make install. This loses dependency information; run "pkg_admin rebuild-tree" afterwards. Perhaps do make replace on depending packages.
- Manually save foo's +REQUIRED_BY file, pkg_delete foo, and then make package of the new foo. Put back the +REQUIRED_BY, and run pkg_admin set unsafe_depends=YES on all packages listed in the +REQUIRED_BY (see the sketch after this list).
- pkg_delete -r foo, and make package on everything you still want. Or do make update. Note that this could delete a lot.
- Automating the first option would be a useful contribution to pkgsrc.
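A rough sketch of the second option follows; the package name foo, the versions, and the category directory example/foo are all purely illustrative:
# save the list of packages that depend on the old monolithic foo
cp /var/db/pkg/foo-1.0/+REQUIRED_BY /tmp/foo.required_by
pkg_delete foo
# build, install, and package the new, split foo
( cd /usr/pkgsrc/example/foo && make package )
# restore the dependency list and mark the dependents as unsafe,
# so that pkg_rolling-replace will rebuild them later
cp /tmp/foo.required_by /var/db/pkg/foo-2.0/+REQUIRED_BY
while read p
do
    pkg_admin set unsafe_depends=YES "$p"
done < /tmp/foo.required_by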
In addition, any problem that can occur when building a package can occur with make replace. Usually, the solution is not specific to make replace.
pkg_chk
See all packages which need upgrading:
pkg_chk -u -q
Update packages from sources:
pkg_chk -u -s
You can set UPDATE_TARGET=package in /etc/mk.conf and specify the -b flag, so that the results of compilation work are saved for later use, and binary packages are used if they are not outdated or dependent on outdated packages.
The main problem with pkg_chk is that it deinstalls all to-be-upgraded candidates before reinstalling them. A failure is not entirely fatal, because the current state of packages is saved in a pkg_chk* file at the root of the pkgsrc directory. But if the new packages can't be built, it is still quite problematic.
pkg_rolling-replace
pkgtools/pkg_rolling-replace is a shell script available via pkgsrc. It makes a list of all packages that need updating, and sorts them in dependency order. Then, it invokes "make replace" on the first one, and repeats. A package needs updating if it is marked unsafe_depends or if it is marked rebuild (=YES). If pkg_rolling-replace is invoked with -u, a package needs updating if pkgtools/pkg_chk reports that the installed version differs from the source version. On error, pkg_rolling-replace exits. The user should remove all working directories and fix the reported problem. This can be tricky, but the same process that is appropriate for a make replace should be followed.
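For instance, to force one particular package to be rebuilt (and, afterwards, anything that gets marked unsafe as a consequence), you can set the rebuild flag by hand; the package name 'openssl' is purely an illustration, and the flags mirror the example later in this section minus -u:
pkg_admin set rebuild=YES openssl
pkg_rolling-replace -rsv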
Because pkg_rolling-replace just invokes make replace, the problems of ABI changes with make replace apply to pkg_rolling-replace, and the system will be in a state which might be inconsistent while pkg_rolling-replace is executing. But, by the time pkg_rolling-replace has successfully finished, the system will be consistent because every package that has a depending package 'make replaced' out from under it will be marked unsafe_depends, and then replaced itself. This replace "rolls" up the dependency tree because pkg_rolling-replace sorts the packages by dependency and replaces the earliest needing-rebuild package first.
Also, some "make replace" operations might fail due to new packages having conflicts with old packages (newly split packages, moving files between packages, etc.). These need the same manual intervention.
See the pkg_rolling-replace man page (installed by the pkg) for further details. Note that it asks that problems with pkg_rolling-replace itself be separated from problems with make replace operations that pkg_rolling-replace chose to do (when the choice was reasonable), and specifically that underlying package build failures not be reported as pkg_rolling-replace problems.
Example
As an example of running pkg_rolling-replace while excluding packages that are marked not to be deleted (+PRESERVE), perhaps for separate manual updating, first list those packages:
cd /var/db/pkg
find . -name "+PRESERVE" | awk -F/ '{print $2}'
Update everything except the packages above:
pkg_rolling-replace -rsuvX bmake,bootstrap-mk-files,pax,pkg_install
(Experience does not suggest that this is necessary; pkg_rolling-replace without these exclusions has not been reported to be problematic. And if it were, the problem would almost certainly be an underlying issue with the specific package.)
Real-world experience with pkg_rolling-replace
Even if a lot of packages need to be updated, make replace usually works very well if the interval from the last 'pkg_rolling-replace -u' run is not that long (a month or so). With a longer interval, like a year or two, the odds of package renaming/splitting are higher. Still, for those who can resolve the issues, this is a fairly low-pain and reliable way to update.
Delete everything
If you don't have a production environment or don't care if your packages will be missing for a while, you can just delete everything and reinstall.
This method is the easiest:
# pkg_delete -Rr '*-*'
-or-
# pkg_delete -ff '*-*'
The wildcard '*-*' matches all installed packages. The first form recursively deletes each package together with the packages that depend on it and the packages it requires; the second forcibly deletes everything without caring about dependencies and should be faster, as it does not perform any dependency recursion. (The quotes around the wildcard keep it from being expanded by the shell first.)
Here is one idea (from posting on pkgsrc-users):
Get a list of packages installed:
# pkg_info -Q PKGPATH -a > pkgs_i_want_to_have
Remove all the packages:
# pkg_info -a | sed 's/ .*//' | tail -r | while read p ; do pkg_delete $p ; done
(There are many ways to do this.)
Then edit your "pkgs_i_want_to_have" (created above) as needed. And reinstall just those from it:
# cat pkgs_i_want_to_have | (while read pp ; do cd /usr/pkgsrc/$pp && make && make install ; done)
An alternative way to choose the packages you want installed is to create your own custom meta-package. A meta-package doesn't install any files itself, but just depends on other packages (usually within a similar topic or need). Have a look at pkgsrc/meta-pkgs category for various examples. If your new meta-package is generic enough and useful for others, please be sure to share it.
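A minimal meta-package Makefile might look roughly like the following sketch; the package name, version, and dependencies are entirely hypothetical, so look at existing packages in meta-pkgs for authoritative examples:
# pkgsrc/meta-pkgs/my-desktop/Makefile (hypothetical)
DISTNAME=       my-desktop-1.0
CATEGORIES=     meta-pkgs
MAINTAINER=     you@example.org
COMMENT=        Meta-package pulling in the packages I want on every desktop

# meta-packages install no files of their own
META_PACKAGE=   yes

DEPENDS+=       firefox-[0-9]*:../../www/firefox
DEPENDS+=       tmux-[0-9]*:../../misc/tmux

.include "../../mk/bsd.pkg.mk"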
different computer
One can use another computer (or a VM) to build packages, and then install them with e.g. pkgin. That computer can use any of the methods here, with the benefit that you can refrain from changing the operational system until you have a complete set of packages (relative to what you need).
chroot environment
This is basically the same as using another computer, except that a chroot is lighter weight than a VM.
Manually set up a directory containing your base operating system (including compilers, libraries, shells, etc). Put a copy of your pkgsrc tree and distfiles into it, or use mount to share directories containing them. Then use the "chroot" command to chroot into that new directory. You can even switch from root to a regular user in the new environment.
Then build and remove packages as you wish without affecting your real production system. Be sure to create packages for everything.
Then use another technique from this list to install from these packages (built in the chroot).
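A rough sketch of the manual setup on NetBSD might look like this; the chroot location, the sets directory, and the choice of sets are all assumptions, and mksandbox automates essentially these steps:
# unpack base system sets into the chroot
mkdir -p /u/chroot
for set in base comp etc
do
    tar -xzpf /path/to/sets/$set.tgz -C /u/chroot
done
# share the host's pkgsrc tree (and the distfiles within it) via a null mount
mkdir -p /u/chroot/usr/pkgsrc
mount -t null /usr/pkgsrc /u/chroot/usr/pkgsrc
# create device nodes, then enter the chroot
( cd /u/chroot/dev && sh MAKEDEV all )
chroot /u/chroot /bin/sh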
Or instead of using this manual method, use pkgtools/mksandbox, or (older) pkg_comp's chroot.
pbulk
See pkgtools/pbulk. This is the standard approach for building packages in bulk. It can be configured to build only the packages you want, rather than all of them; building all of them can take anywhere from most of 24 hours, if you have an enormously powerful machine, to 6 months, if you are retrocomputing.
pkg_comp
Apart from the examples in the man page, it is necessary to supply a list of packages you want to build. The command 'pkg_info -u -Q PKGPATH' will produce a list of the packages you explicitly requested be installed; in some strong sense, that is what you want to rebuild.
After you've built the new packages, you need the list of files to reinstall. Assume that you saved the output of 'pkg_info -u -Q PKGPATH' in /etc/pkgs. The following script will produce the names of the binary packages:
while read x
do
    cd /usr/pkgsrc/$x && make show-var VARNAME=PKGNAME
done < /etc/pkgs
If you use PKGBASE instead of PKGNAME, you get the basename of the file.
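Once the binary packages exist, a loop like the following can install them; the packages directory shown is only an assumption, so adjust it to where your pkg_comp configuration actually deposits packages:
# assumption: pkg_comp left its binary packages in /path/to/pkg_comp/packages/All
while read x
do
    pkg=$( cd /usr/pkgsrc/$x && make show-var VARNAME=PKGNAME )
    pkg_add /path/to/pkg_comp/packages/All/$pkg.tgz
done < /etc/pkgs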
See this BSDFreak Article for a nice tutorial on how to set up and use pkg_comp.
bulk build framework
NB: This section is historical and probably should be deleted; see pbulk above.
Use the scripts in pkgsrc/mk/bulk/, e.g. as pointed out in http://www.netbsd.org/Documentation/pkgsrc/binary.html#bulkbuild.
To go easy on the existing pkgsrc installation, creating a sandbox (automated chroot environment) is highly recommended here: http://www.netbsd.org/Documentation/pkgsrc/binary.html#setting-up-a-sandbox.
You can later mount the pkgsrc/packages/ via NFS wherever you want and install them like:
env PKG_PATH=/mnt/packages/All pkg_add <pkg>
Or upload them to a web site and point pkg_add at it, e.g. pkg_add http://www.site/packages/All/<pkg>
wip/distbb-git - distributed bulk builds
Using wip/distbb-git you may build packages in parallel using several machines and/or chroots. Read the PREFIX/share/doc/distbb/README file for instructions.
pkgdepgraph
Look at the EXAMPLES in the man page.
TODO: how to do this using binary packages only? Put in section above.
Alternative LOCALBASE and PKG_DBDIR
Use an alternative LOCALBASE setting to install the packages under a new location and an alternate PKG_DBDIR for your alternate database of installed packages.
You can choose the PKG_DBDIR via a shell environment variable or by using the -K switch with any of the standard pkg_* tools. Set LOCALBASE and PKG_DBDIR in your mk.conf file as well.
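For example (the paths below are only illustrative):
# in mk.conf: build into an alternative prefix with its own package database
LOCALBASE=      /usr/pkg-new
PKG_DBDIR=      /usr/pkg-new/pkgdb
When inspecting that installation, pass the same database directory to the tools, e.g. pkg_info -K /usr/pkg-new/pkgdb -a.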
You could also simply have a symlink from /usr/pkg to your new LOCALBASE (and /var/db/pkg to your new PKG_DBDIR) and change it whenever you are ready.