This page collects all available project proposals in a single document to permit easy searching within the page.

Note that this page also provides an Atom/RSS feed. If you are interested in being notified about changes to the projects, you should subscribe to this feed.

  • Contact: tech-kern
  • Duration estimate: ~1 month

Implement a fullfs, that is, a filesystem where writes fail with disk full. This is useful for testing applications and tools that often don't react well to this situation because it rarely occurs any more.

The basic fullfs would be just a layerfs layer that you can mount (like nullfs) to get a copy of an existing subtree or volume where writes are rejected with ENOSPC. This is the first thing to get running.

However, for testing it is good to have more than that, so the complete project includes the ability to control the behavior on the fly and a fullfsctl(8) binary that can be used to adjust it.

These are some things (feel free to brainstorm others) that it would be useful for fullfsctl to be able to do:

  • Turn on and off the fail state (so for example you can start up a program, let it run for a while, then have the disk appear to fill up under it)
  • Arm a "doom counter" that allows the next N writes to succeed and then switches to the fail state (to test what happens if the disk fills partway through a write or save operation)
  • Change what error it fails with (ENOSPC is the basic error, but at least EDQUOT and possibly other errors, such as NFS-related ones, are also interesting)

fullfs itself should be implemented as a layerfs layer, not a whole filesystem.

fullfsctl should operate via one or more file-system-specific ioctls applied to the root directory of (or perhaps any file on) the fullfs volume.

There are many ways this could be extended further to provide for more general fault injection.

Posted Sunday night, January 21st, 2024 Tags:
  • Contact: tech-kern
  • Duration estimate: 1 month

Assess and, if appropriate, implement RFC 5927 countermeasures against IPv6 ICMP attacks on TCP. Write ATF tests for any countermeasures implemented, as well as ATF tests for the existing IPv4 countermeasures.
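For TCP, the central RFC 5927 countermeasure is to honor an ICMP error only if the TCP segment quoted in its payload carries a sequence number inside the current unacknowledged send window. The core check is small; a sketch (function and parameter names chosen here for illustration):

```c
#include <stdint.h>

/*
 * Accept an ICMP error only if the sequence number embedded in its
 * quoted TCP header falls within [snd_una, snd_nxt).  Unsigned 32-bit
 * modular arithmetic handles sequence-number wraparound.
 */
int
icmp_seq_in_window(uint32_t seq, uint32_t snd_una, uint32_t snd_nxt)
{
	return (uint32_t)(seq - snd_una) < (uint32_t)(snd_nxt - snd_una);
}
```

An off-path attacker then has to guess an in-window sequence number before a forged ICMP error (e.g. a "packet too big" or "port unreachable") is acted upon.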

This project will close PR kern/35392.

The IPv4 countermeasures were previously implemented here:

Posted at lunch time on Sunday, October 1st, 2023 Tags:

Implement automatic tests with ATF for all the PAM modules under src/lib/libpam/modules.

The modules, such as pam_krb5, are not currently automatically tested, despite being security-critical, which has led to severe regressions.

Posted mid-morning Friday, September 29th, 2023 Tags:

Buildlink is a framework in pkgsrc that controls what headers and libraries are seen by a package's configure and build processes. We'd like to expand on this isolation by:

  • Hiding -L${PREFIX}/{include,lib} from CFLAGS/LDFLAGS, so that (for instance) undeclared buildlink dependencies will give the needed "not found" earlier in the package-development cycle
  • Hiding ${PREFIX}/bin from PATH, so that (for instance) undeclared build- or run-time tools will give the needed "not found" earlier in the package-development cycle

Steps:
  1. Do bulk builds with the existing defaults on a few platforms (e.g., NetBSD, macOS, Illumos, Linux)
  2. Rerun the bulk builds from scratch, this time with the desired infrastructure changes
  3. Compare before and after to see which package builds break
  4. Fix them
  5. Enable the infrastructure changes by default for all users
Posted Monday afternoon, July 3rd, 2023 Tags:

The pkgsrc mk.conf documentation says: # To sanitize the environment, set PKGSRC_SETENV=${SETENV} -i. We'd like to enable this isolation by default. Steps:

  1. Do bulk builds with the existing defaults on a few platforms (e.g., NetBSD, macOS, Illumos, Linux)
  2. Rerun the bulk builds from scratch, this time with the desired infrastructure change
  3. Compare before and after to see which package builds break
  4. Fix them
  5. Enable the infrastructure change by default for all users
Posted at lunch time on Friday, May 12th, 2023 Tags:

To create large sets of binary packages, pkgsrc uses the pbulk tool.

pbulk is designed to build packages in bulk in a chroot sandbox (which may be based on an older NetBSD release, a 32-bit release on a 64-bit machine, or similar).

However, managing multiple pbulk installations on a single machine can quickly become unwieldy.

It would be nice to have a web frontend for managing pbulk, restarting builds, creating new build sandboxes based on different NetBSD versions and package sets, etc.

For an example of a typical high-performance pbulk setup used on "large" hardware, see Nia's pbulk scripts. Notably, these include chroot management and can be used to build packages for multiple NetBSD (and pkgsrc) versions simultaneously by using different "base directories".

  • A management tool should support parallel pbulk across many chroots, but should also easily allow simultaneous builds for different NetBSD versions on the same machine.
  • It should be possible to schedule builds for later, somewhat like Jenkins.
  • There should be some kind of access management system.
  • pbulk creates status directories; it would be useful to present as much data from them as possible, as well as information on system resource usage (e.g. to find any stuck processes and monitor hardware health).
  • It should be possible for users to "kick" the builds (to quickly restart them, kill "stuck" processes, and so on).
  • It should be possible to schedule builds for upload after manual review of the reports, resulting in an rsync from a local staging directory to
  • It should be usable "over a morning coffee".
  • For bonus points, it should be easily installable from pkgsrc, ideally with an RC script.
  • For bonus points, integrate well with the NetBSD base system tools (e.g. bozohttpd).
Posted late Wednesday evening, January 18th, 2023 Tags:

Currently NetBSD can be booted via UEFI firmware, but only offers the default boot loader setup so multi-boot environments are hard to create. This also causes cryptic displays in the firmware boot order menu or boot select menu, like "UEFI OS", instead of "NetBSD 10.0".

The UEFI spec offers support to configure load options, which include a path to the bootloader and a description of the operating system, see the UEFI spec. This project is to implement setting up proper load option variables at least on x86 machines booting via UEFI.

Part of the project is to find the best place to set these options up. Some integration with sysinst might be needed; maybe sysinst is the right place to set these variables. If not, sysinst may simply be changed to use a different subdirectory on the ESP for the NetBSD bootloader, and the variables setup might happen elsewhere.

Currently the kernel interface to access the SetVariable() and other EFI runtime callbacks exists, but there is no userland tool to operate it.

It is not clear what the EFI path set in the variable should be, and mapping NetBSD disks/partitions to EFI path notation is not trivial.
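A userland tool would at some point have to serialize an EFI_LOAD_OPTION structure (UEFI spec, Boot Manager chapter) into the data of a Boot#### variable: a UINT32 attributes word, a UINT16 device-path-list length, a NUL-terminated UCS-2 description, then the device path bytes. A sketch of that building block (the function name is invented; it assumes a little-endian host and an ASCII-only description for simplicity):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LOAD_OPTION_ACTIVE	0x00000001

/*
 * Serialize an EFI_LOAD_OPTION into buf: attributes, length of the
 * device path list, UCS-2 description (incl. terminating NUL), then
 * the raw device path bytes.  Returns bytes written, or 0 if buf is
 * too small.  Assumes a little-endian host and ASCII description.
 */
size_t
build_load_option(uint8_t *buf, size_t buflen, uint32_t attrs,
    const char *desc, const uint8_t *devpath, uint16_t devpathlen)
{
	size_t desclen = (strlen(desc) + 1) * 2;	/* UCS-2 incl. NUL */
	size_t total = 4 + 2 + desclen + devpathlen;
	size_t off = 0;

	if (buflen < total)
		return 0;
	memcpy(buf + off, &attrs, 4); off += 4;
	memcpy(buf + off, &devpathlen, 2); off += 2;
	for (const char *p = desc; ; p++) {		/* ASCII -> UCS-2LE */
		buf[off++] = (uint8_t)*p;
		buf[off++] = 0;
		if (*p == '\0')
			break;
	}
	memcpy(buf + off, devpath, devpathlen);
	return total;
}
```

The resulting blob would be handed to the kernel's SetVariable() interface under a name like Boot0001, with a description such as "NetBSD 10.0" so the firmware menu shows something meaningful.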

Posted Thursday afternoon, January 12th, 2023 Tags:
  • Contact: tech-pkg
  • Mentors: unknown
  • Duration estimate: unknown

In recent years, packages whose builds download things on the fly have become an increasing problem. Such downloads violate both pkgsrc principles/rules and standard best practices: at best, they bypass integrity checks on the downloaded material, and at worst they may check out arbitrary recent changes from GitHub or other hosting sites, which in addition to being a supply-chain security risk also makes reliable repeatable builds impossible.

It has simultaneously grown more difficult to find and work around these issues by hand; these techniques have been growing increasingly popular among those who should perhaps know better, and meanwhile upstream build automation has been steadily growing more elaborate and more opaque. Consequently, we would like a way to automatically detect violations of the rules.

Currently, pbulk runs in two steps: first it scans the tree to find out what it needs to build, and once this is done it starts building. Builds progress in the usual pkgsrc way: the fetch phase comes first and downloads anything needed, then the rest of the build continues. This interleaves (expected) downloads and build operations, which on the one hand tends to give the best build throughput but on the other makes it difficult to apply network access restrictions.

The goal of this project is to set up infrastructure such that bulk builds can run without network access during the build phase. (Or, more specifically, in pkgsrc terms, everything other than the fetch phase.) Then attempts to download things on the fly will fail.

There are two ways to accomplish this and we will probably want both of them. One is to add a separate fetch step to pbulk, so it and the build step can run with different global network configuration. The other is to provide mechanisms so that expected downloads can proceed and unwanted ones cannot.

Separate fetch step

Since bulk builds are generally done on dedicated or mostly-dedicated systems (whether real or virtual), system-wide changes to the network configuration to prevent downloads during the build step will be acceptable in most setups, and also sufficient to accomplish the desired goals.

There are also two possible ways to implement a separate fetch step: as part of pbulk's own operational flow or as a separate external invocation of the pbulk components. The advantage of the former is that it requires less operator intervention, while the advantage of the latter is that it doesn't require teaching pbulk to manipulate the network state on its own. (In general it would need to turn external access off before the build step, and then restore it after in order to be able to upload the build results.) Since there are many possible ways one might manipulate the network state and the details vary between operating systems, and in some cases the operations might require privileges the pbulk process doesn't have, making pbulk do it adds considerable complication; on the other hand, setting up pbulk is already notoriously complicated and requiring additional user-written scripts to manipulate partial build states and adjust the network is a significant drawback.

Another consideration is that to avoid garbaging the failure reports, any download failures need to be recorded by pbulk during the download step, and the packages affected marked as failed, so that those failures end up in the output results. Otherwise the build step will retry the fetch in an environment without a network, and that will then fail, but nobody will be able to see why. For this reason it isn't sufficient to just, for example, run "make -k fetch" from the top level of pkgsrc before building.

Also note that rather than just iterating the package list and running "make fetch" from inside pbulk, it might be desirable to use "make fetch-list" or similar and then combine the results and feed them to a separate download tool. The simple approach can't readily adapt to available network bandwidth: depending on the build host's effective bandwidth from various download sites it might either flood the network or be horrendously slow, and no single fixed parallelism setting can avoid this. Trying to teach pbulk itself to do download load balancing is clearly the wrong idea. However, this is considerably more involved, if only because integrating failure results into the pbulk state is considerably more complicated.

Blocking unauthorized downloads

The other approach is to arrange things so that unwanted downloads will fail. There are a number of possible ways to arrange this on the basic assumption that the system has no network access to the outside world by default. (For example, it might have no default route, or it might have a firewall config that blocks outgoing connections.) Then some additional mechanism is introduced into the pkgsrc fetch stage so that authorized downloads can proceed. One mechanism is to set up a local HTTP proxy and only make the proxy config available to the fetch stage. Another, possibly preferable if the build is happening on a cluster of VMs or chroots, is to ssh to another local machine to download there; that allows mounting the distfiles directory read-only on the build hosts. There are probably others. Part of the goal of this project should be to select one or a small number of reasonable mechanisms and provide the necessary support in pbulk so each can be enabled in a relatively turn-key fashion. We want it to be easy to configure this restriction and ideally in the long term we'd like it to be able to default to "on".

Note that it will almost certainly be necessary to strengthen the pkgsrc infrastructure to support this. For example, conditionally passing around HTTP proxy config depending on the pkgsrc phase will require changes to pkgsrc itself. Also, while one can redirect fetch to a different tool via the FETCH_USING variable, as things stand this runs the risk of breaking packages that need to set it itself. There was at one point some talk about improving this but it apparently never went anywhere.

An additional possibility along these lines is to leverage OS-specific security frameworks to prohibit unwanted downloading. This has the advantage of not needing to disable the network in general, so it can be engaged even for ordinary builds and on non-dedicated hosts. However, because security frameworks and their state of usability vary, it isn't a general solution. In particular, on NetBSD at the moment we do not have anything that can do this. (It might be possible to use kauth for it, but if so the necessary policy modules do not currently exist.) Consequently this option is in some ways a separate project. Note if attempting it that simplistic solutions (e.g. blocking all attempts to create sockets) will probably not work adequately.

Other considerations

Note when considering this project:

  • Working on pbulk at all is not entirely trivial.
  • It's necessary to coordinate with Joerg and other stakeholders so that the changes can eventually be merged.
  • Any scheme will probably break at least a few innocent packages that will then probably need somebody to patch them.

Overall, one does hope that any package that attempts to download things and finds it can't will then fail rather than silently doing something different that we might or might not be able to detect. It is possible in the long run that we'll want to use a security framework that can log download attempts and provide an audit trail; however, first steps first.

Posted Sunday night, January 8th, 2023 Tags:

Java software such as Eclipse uses SWT, the "Standard Widget Toolkit".

It would be good to have a NetBSD port. Since SWT already runs on Linux, the work is probably mostly adapting that port to NetBSD.

Posted at lunch time on Wednesday, September 28th, 2022 Tags:
  • Contact: nia
  • Mentors: nia
  • Duration estimate: 350h

NetBSD includes various simple, command-line audio tools by default, such as audioplay(1), audiorecord(1), mixerctl(1), aiomixer(1), audiocfg(1)...

These tools are useful because they provide almost everything a user needs to test basic functionality of their audio hardware. They are critically important for basic diagnostics.

It would be nice to have a tool to easily visualize audio input using a simple Curses interface. Some ideas for its possible functionality:

  • Display basic live-updating frequency graph using bars
  • Display channels separately
  • 'Echo' option (play back audio as it is input)
  • pad(4) support (NetBSD has support for 'virtual' audio devices. This is useful because you can record the output of an application by having it output to the audio device that opening /dev/pad creates. This can also 'echo' by outputting the data read from the pad device.)
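As a sketch of the first item, each bar's height could be derived from the peak amplitude of a block of input samples (the function name and scaling here are illustrative; a real frequency display would instead run an FFT per block):

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Map a block of signed 16-bit PCM samples to a bar height in
 * [0, max_height] based on the block's peak amplitude.  This is the
 * simplest useful level meter; a spectrum display would replace the
 * peak search with an FFT over the same block.
 */
int
block_to_bar(const int16_t *samples, size_t n, int max_height)
{
	int peak = 0;

	for (size_t i = 0; i < n; i++) {
		int v = abs(samples[i]);
		if (v > peak)
			peak = v;
	}
	return peak * max_height / 32767;
}
```

The Curses loop would then read a block from the audio device, compute one bar per channel, and redraw.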

You need NetBSD installed on physical hardware (older laptops work well and are cheaply available) and a microphone for this project. Applicants should be familiar with the C programming language.

Posted early Wednesday morning, May 11th, 2022 Tags:
  • Contact: nia
  • Mentors: nia
  • Duration estimate: 350h

pkgsrc is NetBSD's native package building system. It's also used on other platforms, such as illumos. It includes numerous graphical environments, including Xfce, MATE, and LXQt, but support for Enlightenment has since bitrotted and been largely removed. Support for its related fork Moksha is missing entirely.

Enlightenment is particularly interesting for NetBSD because it's lightweight, BSD licensed, and suitable for mobile applications. We're not sure about the benefits of Moksha over Enlightenment proper, but it's worth investigating.

Since Enlightenment is written in C, the applicant should ideally have a basic understanding of C and Unix system APIs. In order for the port not to bit-rot in the future, it should be done well, with patches integrated upstream where possible. They should have a laptop with NetBSD installed (older laptops are likely more representative of typical NetBSD uses and can be picked up cheap from local auction sites).

Integrating Enlightenment into pkgsrc will require knowledge of build systems and make (pkgsrc in particular is built on top of BSD make).


  • A basic port enables basic Enlightenment installation on NetBSD when installed from pkgsrc.
  • A more advanced and ideal port has tight integration with NetBSD system APIs, supporting features like native audio mixing and reading from sensors.
  • For extra brownie points, the pkgsrc package should work on illumos too.
Posted early Wednesday morning, May 11th, 2022 Tags:
  • Contact: nia
  • Mentors: nia
  • Duration estimate: 175h

A core component of NetBSD is the 'xsrc' repository, which contains a 'classic' distribution of X11, all related programs, and libraries, as found on Unix systems from times of yore.

xsrc uses the NetBSD build system and only BSD make to build, which means it builds extremely quickly, with minimal dependencies, and is easy to cross-compile. It currently includes an implementation of the OpenGL graphics API (Mesa), but not an implementation of the next-generation Vulkan graphics API, or OpenCL, the GPU-accelerated compute API, which can also be obtained from Mesa.

Most of modern X.Org is built with Meson and Python, so some level of translation is required to integrate new components.

This project involves making modifications to the Mesa Vulkan and OpenCL libraries in order to allow them to work on NetBSD (this part requires basic knowledge of the C programming language and Unix APIs), ideally submitting them upstream, until Vulkan and OpenCL support can be built on NetBSD, and then integrating the relevant components into the NetBSD build system using only BSD Make.

The candidate should ideally have some knowledge of the C programming language and build systems.

Posted early Wednesday morning, May 11th, 2022 Tags:
  • Contact: nia
  • Mentors: nia
  • Duration estimate: 350h

pkgsrc is NetBSD's native package building system. It includes numerous graphical environments, including Xfce, MATE, and LXQt, but many have limited support for native NetBSD system APIs, e.g. support for reading battery levels and audio volume.

We really would like better desktop environment integration, and this requires some work on the upstream projects in C and in some cases C++.

An applicant should have basic familiarity with build systems, make, and C. They should be good at carefully reading documentation, as much of this is documented in manual pages like audio(4) and envsys(4). They should have a laptop with NetBSD installed (older laptops are likely more representative of typical NetBSD uses and can be picked up cheap from local auction sites).

They should be able to investigate the current level of support in various third-party projects and identify priority targets where native code for NetBSD can be written.

Nia is very experienced in writing native code for NetBSD audio and sensors and would be happy to answer questions.

As the project continues, we might even be able to start porting more applications and applets.

Posted early Wednesday morning, May 11th, 2022 Tags:

The NetBSD Wi-Fi stack is being reworked to support newer protocols, higher speeds, and fine-grained locking using code from FreeBSD. As part of this work, all existing NetBSD Wi-Fi drivers need to be reworked to the new Wi-Fi code base.

Successful completion of this project requires you to have access to hardware that is already supported by NetBSD but not yet converted. See the Driver state matrix for a list of devices to convert. Many older devices can be found cheap on sites like eBay.

When applying for this project, please note which driver(s) you want to convert.

Posted late Saturday afternoon, November 27th, 2021 Tags:

Currently resize_ffs(8) does not work with FFSv2 file systems.

This is a significant problem, since we currently rely on resize_ffs to provide live images for ARM, and FFSv1 is lacking in various ways (e.g. 2038-limited timestamps and maximum disk sizes of 1TB).

Posted late Thursday afternoon, November 18th, 2021 Tags:

We are currently not distributing official TNF binary packages with embedded signatures. The pkgsrc infrastructure seems to be mostly there, but there are loose ends, and this makes NetBSD fall behind other pkgsrc platforms, where everything needed comes with the bootstrap kit.

There have been various related experiments and discussions in the past, and the responsible persons are willing to change it now (that is: ideally have all binary pkgs for NetBSD 10 signed and verified already).

This project is about fixing the loose ends.

Intended user workflow

  • the user installs a new system
  • at the end of the sysinst installation the config page offers a binary pkgs setup
  • the user selects a repository (with a working default) and sysinst triggers all necessary configuration and installation actions (this may involve downloads and running fixup scripts, but may not require manual intervention)
  • after a reboot of the new machine, binary pkgs can be directly added and will be automatically verified (e.g.: "pkg_add firefox" or "pkg_add xfce4" will work)

Implementation details

The following drafts a possible pkgsrc/pkgbuilders/releng workflow and assumes x509 signing. This is just to make the project description clearer; the project does not require an x509-based solution.

Operational workflow for pkg creation

  • Thomas Klausner (wiz) is the keeper of the pkgsrc master CA key. He creates intermediate CA keys for every developer in charge of some pkgbuilding machines, signs them with the master CA key and distributes them.
  • The public part of the master CA certificate becomes part of the NetBSD release and is available as a trust anchor.
  • Every developer in charge of some pkgbuild machines creates a signing key (without passphrase) from their intermediate CA key and installs it on the individual pkg build machine

The main point of the whole process is that NetBSD and pkgsrc have different release cycles, and pkg building machines come and go. We do not want a fixed set of allowed machine signing keys distributed with a (long living) NetBSD release, but we do not want to just trust whatever the binary pkg repository offers, so there needs to be proper automatic validation of all keys used for a repository against some trust anchor provided with the base system. With the current size of the project it might be manageable to have all finally used signing keys signed directly by the pkgsrc master key, but a design that allows an interim step where individual signing keys could be created by the developers in charge of the machines would be preferable.

Deliverables for this project

  1. all required changes (if any) for the pkgtools and pkgsrc makefiles, or any new tools/scripts (either as a set of patches or committed).

  2. a description of the overall workflow, e.g. as a wiki page or as part of the web site.

  3. concrete instructions for the various parties involved in the deployment:

    • pkgsrc master key/cert handling (Thomas)
    • releng: how to make the trust anchor part of the release and what needs to be configured/done by sysinst
      • globally
      • post pkg repository selections
    • pkg build administrators: how to create signing keys and how to configure the pkgbuild machines

And of course all this needs to be tested upfront.


If this project succeeds and does not use x509, propose removal of the bit-rotted and not fully functional x509 support from the pkg tools and the pkgsrc infrastructure.

Setup tried so far and how it fails

Thomas (wiz@) provided the certificate for the TNF CA, intended to be used to verify all signed binary pkgs. When everything works, this key should be part of the base system.

Thomas also created a cert+key for the test setup, signed by the TNF CA key, intended to be used to sign binary pkgs on a single pkg build setup.

The instructions for these two steps are in pkgsrc/pkgtools/pkg_install/files/x509/signing.txt - a script and a config file are in the same directory.

On the build machine, the setup is simple:

  • store the keys, for example, in /etc/pkg-certs. The names used below are 00.pem for the TNF CA cert, pkgkey_cert.pem for the individual builder certificate, and pkgkey_key.pem for the corresponding key (which needs to have no passphrase)
  • Add to /etc/mk.conf (or the equivalent in the bulk build tree)

     # signed binary pkgs, see
  • Add to /etc/pkg_install.conf


Then create a single pkg, like:

cd /usr/pkgsrc/pkgtools/digest
make package
make install

At the end of make package you should see successful signing of the binary pkg. But the make install will fail to verify the certificate.

Note: a key point of the whole setup is to avoid having to add the content of pkgkey_cert.pem to 00.pem (or similar). We want to avoid having to distribute many (changing) keys of build machines with the base system.

An alternative solution would make the key distribution part of the initial setup (e.g. download from a fixed relative path when selecting a pkg repository URL), but no off-the-shelf tools for that currently exist.

Posted late Wednesday morning, October 27th, 2021 Tags:

The current UI of pkgsrc MESSAGE has a couple of drawbacks:

  • When installing a lot of packages via pkg_add or pkgin, the MESSAGE output often gets lost
  • When updating packages via pkg_add or pkgin, the MESSAGE is printed again even if it has not changed

For possible inspirations please look at OpenBSD ports' pkg-readmes and/or other package systems.

Posted mid-morning Monday, September 6th, 2021 Tags:

NetBSD has large amounts of documentation in XML DocBook: the Guide, the pkgsrc Guide, large parts of the website.

asciidoc is a much nicer, friendlier format than XML. It preserves the same advantages as DocBook: easy conversion to PDF and other formats for book distribution, and rich semantics.

There is a tool for converting DocBook to asciidoc, but it is likely not perfect, and output will require manual adjustment:

asciidoc itself can also output DocBook. We might leverage this so we can convert the website step-by-step, and keep the existing templates.

This project will require careful adjustment of build systems in the htdocs repository and those used by the pkgsrc guide.

A working proposal should demonstrate one page (of the website, pkgsrc guide, or NetBSD guide), converted to asciidoc and hooked up to the build system, integrated, with reasonable looking output.

Posted at lunch time on Sunday, May 2nd, 2021 Tags:
  • Contact: tech-kern
  • Funded by: ($400 expires 1/July/2021)

The current xhci driver requires contiguous allocations, and with higher uptime, NetBSD's kernel memory becomes more and more fragmented.
Eventually, large contiguous allocations aren't possible, resulting in random USB driver failures.
This feature will improve user experience of NetBSD.

Posted late Thursday evening, March 25th, 2021 Tags:

When a system needs more memory but has free disk space, it could automatically create swap files and then delete them later when they are no longer needed.

The ideal solution would be configurable for:

  • thresholds for creation
  • min/max (don't fill up the disk entirely)
  • encryption settings

The "listener" for the file creation should avoid thrashing, have timeouts, and handle disk space usage sanely.
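Avoiding thrashing is mostly a matter of hysteresis: separate create and remove thresholds with a gap between them. A minimal sketch of the decision logic such a listener might poll (all names and units are illustrative; a real daemon would read memory pressure from the kernel and add or remove swap files with swapctl(2)):

```c
#include <stdbool.h>

enum swap_action { SWAP_NONE, SWAP_CREATE, SWAP_REMOVE };

struct swap_policy {
	int create_below_pct;	/* create when free memory < this % */
	int remove_above_pct;	/* remove when free memory > this % */
	int max_files;		/* don't fill up the disk entirely */
};

/*
 * Decide whether to create or remove a swap file.  The gap between
 * the two thresholds provides hysteresis, so the daemon doesn't
 * flip-flop (thrash) when memory hovers near a single threshold.
 */
enum swap_action
swap_decide(const struct swap_policy *p, int free_mem_pct, int nfiles)
{
	if (free_mem_pct < p->create_below_pct && nfiles < p->max_files)
		return SWAP_CREATE;
	if (free_mem_pct > p->remove_above_pct && nfiles > 0)
		return SWAP_REMOVE;
	return SWAP_NONE;
}
```

Timeouts and rate limits would be layered on top, so a burst of pressure doesn't create several files at once.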

Posted Tuesday evening, October 13th, 2020 Tags:

OpenLDAP already has a SASL back-end for CYRUS-SASL.
In NetBSD, we have our own SASL-C library which has similar functionality and can be used in OpenLDAP instead of CYRUS.
Base postfix already does this.

There is a cyrus.c file where all the work is done.
We can make a saslc.c one that uses our library.
This will allow different authentication schemes to be used for the client programs (so we will be able to run ldapsearch against an Active Directory server using GSSAPI).

Posted at lunch time on Sunday, August 11th, 2019 Tags:

NetBSD has an extensive test suite that tests native kernel and userland code.

Mounting the root file system is one of the last steps the kernel does during boot before starting the first process (init(8)).

Root file system selection is not covered by the current test suite.

How to find the root file system is specified in the kernel configuration file. E.g.:

config netbsd root on ? type ?  
config netbsd root on sd0a type ?  

The first is a wildcard specification which causes the kernel to look for the root file system on the device that the kernel was booted from. The second form specifies the device and partition that contains the root file system. Other forms are also possible.

The selection process is a complex interaction between various global variables that get initialized from the kernel configuration file and by machine specific code that processes the information passed by the bootloader about where the kernel was loaded from.

This selection process is performed mostly by a function named setroot in the file sys/kern/kern_subr.c.
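As a toy model of just the wildcard rule (the function name is invented; the real setroot() also deals with forced roots, network roots, md(4) ramdisks, and more):

```c
#include <string.h>

/*
 * Given the device named in "config netbsd root on <dev> type <fs>"
 * and the device the machine-dependent code determined the kernel was
 * booted from, pick the root device.  "?" is the wildcard form.
 */
const char *
pick_root(const char *configured, const char *booted_from)
{
	if (strcmp(configured, "?") == 0)
		return booted_from;	/* wildcard: root on boot device */
	return configured;		/* explicit: e.g. "sd0a" */
}
```

Unit tests would pin down exactly this kind of behavior for each documented use case, so refactoring setroot() becomes safe.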

The project could be completed in a number of steps:

  • Document the existing use cases and config ... syntax.
  • Document the processing steps and functions called by setroot.
  • Document how the various global variables interact.
  • Write unit tests using rumpservers for the ATF framework for the documented use cases.

The project would initially be focused on x86 (amd64 and i386).

Posted Monday night, January 21st, 2019 Tags:

clang-format is a tool to format source code according to a set of rules and heuristics. Like most tools, it is not perfect nor covers every single case, but it is good enough to be helpful.

clang-format can be used for several purposes:

  • Quickly reformat a block of code to the NetBSD (KNF) style.
  • Spot style mistakes, typos and possible improvements in files.
  • Help to follow the coding style rules.


  • Create a configuration file, .clang-format, that approximates the NetBSD coding style
  • Patch LibFormat to handle missing coding style rules.
  • Integrate .clang-format with the NetBSD distribution.
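A starting point for the first deliverable might look like the following fragment; these option values only approximate NetBSD's style(5) and would need refinement against real sources (and some rules, per the second deliverable, may need LibFormat patches):

```yaml
# Sketch of a KNF-ish .clang-format; values are approximations.
BasedOnStyle: LLVM
IndentWidth: 8
UseTab: Always
ColumnLimit: 80
ContinuationIndentWidth: 4
BreakBeforeBraces: Linux        # function braces on their own line
AlwaysBreakAfterReturnType: TopLevelDefinitions
IndentCaseLabels: false
```
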
Posted late Friday afternoon, January 18th, 2019 Tags:

racoon(8) is the current IKEv1 implementation used in NetBSD. The racoon code is old and crufty and full of potential security issues. We would like to replace it. There are other implementations available, such as StrongSwan, openiked/isakmpd, racoon2.

This project has two stages:

  • Evaluate all 3 (or more) solutions, describe and document their pros and cons, and then settle into one of them.

  • Port it to NetBSD to replace racoon.

I have started working on this for racoon2 (see the TODO file), and also have build glue for NetBSD for it, and it works. I've also gotten openiked to compile (but not work).

Posted at teatime on Friday, January 18th, 2019 Tags:

The urtwn(4) and rtwn(4) drivers contain a lot of duplicated code.
Merging them will improve both.

This project is on hold due to the conversion project needing to be completed first.

Posted at teatime on Friday, January 18th, 2019 Tags:

NetBSD has the capability to run binaries compiled for Linux under compat_linux. This is a thin in-kernel translation layer that implements the same ABI as the Linux kernel, translating Linux system calls to NetBSD ones.

Not all Linux syscalls are implemented. This means some programs cannot run.

This project is about identifying critical missing syscalls and adding support for them.

In the course of this project, you should find at least one Linux binary that does not yet run on NetBSD using compat_linux to use as a test case (your mentor may have suggestions), trace the program to find the missing features it requires, make note of them, and begin implementing them in NetBSD's Linux compatibility layer.

Posted Friday afternoon, January 18th, 2019 Tags:

VMware provides an emulator that could use graphical acceleration, but does not on NetBSD.
A DRM driver exists for Linux and could be adapted, like other DRM drivers that have been ported.

Posted Friday afternoon, January 18th, 2019 Tags:
  • Contact: port-arm
  • Duration estimate: 3-6 months

Android is an extremely popular platform, with good software support.

NetBSD has some COMPAT_LINUX support, and it might be possible to leverage this to run Android applications.
This is only done for GNU/Linux platforms right now (SUSE / Debian).

We need to start with Android x86, as COMPAT_LINUX for x86 already exists and is known to work.

As this is a difficult project, the project will need adjustments with time.

  • Create an anbox chroot on linux/x86, experiment with running it with NetBSD.
  • Experiment with running simplest Android program
  • Implement missing syscall emulation as needed
  • ??? (gap for difficulties we will find from this)
  • Package anbox-chroot in pkgsrc


Posted Friday afternoon, January 18th, 2019 Tags:
  • Contact: tech-net
  • Duration estimate: 175h

Access to some hardware registers and other things can only be done by one CPU at a time.
An easy way to do this is to make the entire network stack run with a single lock held, so operations only take place on one core.
This is inefficient if you ever want to use more than one core, as faster cards demand.

Adapting old drivers so they can run without the rest of the network stack holding this lock will improve NetBSD networking.
A large number of drivers must be adapted, and some of the devices are emulated by virtual machines, too; some examples:

  • Realtek RTL8139C+/8169 Ethernet re(4) (supported by QEMU)
  • AMD PCnet pcn(4) (supported by QEMU and VMware)
  • Novell NE1000/NE2000 ne(4) (supported by QEMU)
  • Atheros/Killer Gigabit Ethernet alc(4)
  • Attansic Gigabit Ethernet ale(4)
  • Broadcom Gigabit Ethernet bge(4)
  • Broadcom NetXtreme bnx(4)

You may find others listed in pci(4). You may well have a machine with a device whose driver hasn't been converted yet.

The file src/doc/TODO.smpnet in the NetBSD source tree contains a list of fully converted drivers that you may use as examples, as well as some general guidelines.

When applying for this project, please note which driver you would like to work on.

Posted Friday afternoon, January 18th, 2019 Tags:

Raspberry Pi is a very popular ARM board.

It has a modern graphical driver, VC4.

NetBSD already supports several DRM drivers (from Linux 4.4), living in sys/external/bsd/drm2. Adapting this one will make Raspberry Pi work better out of the box.

While this project requires hardware, we can help with supplying a Raspberry Pi if needed.

Milestones for this project:

  • VC4 driver builds as part of netbsd source tree (no hardware access needed)
  • Adjust device tree configuration so VC4 driver is used
  • Iron out bugs that appear from running it
Posted Friday afternoon, January 18th, 2019 Tags:

The iscsictl(1) program manages the iSCSI instances on the local computer. It communicates with the iscsid(8) daemon to send queries using iSCSI protocol.

Possible enhancements:

  • Review the iscsictl(1) manpage. For instance, the command add_target has no description, and [target-opts] could be made to refer to "Target Options".
  • Add a mode to the iscsictl(1) program to log in to sessions at boot. It could be a batch command (the name is open to discussion) that reads an /etc/iscsi.conf file. Some parts of iscsictl(1) from FreeBSD could be ported.
  • Implement find_isns_servers.

The iscsi-target(8) server makes it possible to set up iSCSI targets on a NetBSD host and present block storage to the network. It can be used to test the iscsictl(1) implementation.

Posted late Monday evening, January 14th, 2019 Tags:

Improvements that can be done to NPF with reference to WIP code/descriptions:

  • Import thmap, needed for newer NPF
  • WIP dynamic NAT address and NETMAP
  • Use of "any"
    map $ext_if dynamic any -> $ext_v4 pass family inet4 from $int_net.
    needs a few syntactic fixes/wrappers (e.g. loading can just handle "any" here, since it's actually correct, just merely not supported by the syntax; you can replace it with, though)
  • Traffic redirection: I think it just needs IPv6 handling and testing
Posted Monday afternoon, December 10th, 2018 Tags:

The Citrus locale code in NetBSD's libc stores the locale data and code in shared object files. Dynamic linking is used to load the code by name using dlopen(3). The static libc build (libc.a) can't use dlopen(3) to open locales, so it only supports the C locale, which is hard-coded.

This project is about adding make(1) rules to compile all the locales in libc and code to select each one by name. The same technique is used in libpam (libpam.a).

Posted late Sunday night, November 12th, 2018 Tags:

NetBSD has an extensive test suite that tests native kernel and userland code. NetBSD can run Linux binaries under emulation (notably on x86, but other platforms such as ARM have some support too). The Linux emulation is not covered by the test suite. It should be possible to run an appropriate subset of the tests when compiled as Linux binaries.

The project could be completed in a number of steps:

  • Determine tests that make sense to run under Linux emulation (e.g. syscalls)
  • Compile tests on Linux and then run on NetBSD
  • Add new/altered tests for Linux-specific APIs or features
  • Build cross-compilation environment to build Linux binaries on NetBSD, to make the test-suite self-hosting
  • Fix Linux emulation for tests that fail
  • Use tests to add Linux emulation for missing syscalls (e.g. timer_*)

It may also be instructive to look at the Linux Test Project.

The project would initially be focused on x86 (amd64 and i386).

Posted at lunch time on Thursday, February 15th, 2018 Tags:

Debian's .deb packaging format is supported by mature and user-friendly packaging tools.
It is also the native packaging format on several systems, and supporting it would improve the user experience there.
It would be nice to be able to generate pkgsrc packages in this format.

Prior work exists for generating packages in other formats.


Milestones:

  • Look up the .deb format documentation and experiment with the tooling
  • Investigate differences between pkgsrc's pkg and deb format
  • Import necessary tools to generate .deb packages
  • Build many packages and look for bugs
Posted Monday night, February 12th, 2018 Tags:
  • Adapt existing ed25519 and salsa20 implementations to netpgp, netpgpverify
  • Maintain compatibility and interoperability with gpg2's usage
  • Maintain compatibility with openssh's keys
  • Extend tests to cover new algorithms

Extended goals:

  • provide a standalone netpgp signing utility, to mirror the netpgpverify verification utility
Posted Monday evening, February 12th, 2018 Tags:

Integrate the LLVM Scudo with the basesystem framework. Build and execute base programs against Scudo.


Milestones:

  • Ensure completeness of the toolchain in the basesystem.
  • Add a new option for building the basesystem utilities with Scudo.
  • Finish the integration and report bugs.
  • Research Scudo for pkgsrc.
Posted Saturday afternoon, January 27th, 2018 Tags:

The NetBSD source code is verified by static analyzers like Coverity. Attempt to automate the process, report bugs, and fix them where possible.


Milestones:

  • Consult and research the available tools.
  • Integrate the tools for the purposes of the NetBSD project.
  • Scan the sources; fix the problems where possible, otherwise file bug reports
Posted Saturday afternoon, January 27th, 2018 Tags:

There is initial functional support for syzkaller on NetBSD (as a guest). Resume the porting effort, run the fuzzer, and report kernel bugs.


Milestones:

  • Ensure completeness of the current support.
  • Execute the fuzzer and gather reports, narrow down problems, translate to C reproducers.
  • Add missing features, fix bugs in the NetBSD support.
Posted Saturday afternoon, January 27th, 2018 Tags:

Integrate the LLVM libFuzzer with the basesystem framework. Build and execute base programs against libFuzzer.


Milestones:

  • Ensure completeness of the toolchain in the basesystem.
  • Add a new option for building the basesystem utilities with libFuzzer.
  • Finish the integration and report bugs.
Posted Saturday afternoon, January 27th, 2018 Tags:

Add support in the pkgsrc framework for building packages with sanitizers.

Expected sanitizer options:

  • Address (ASan)
  • Memory (MSan)
  • MemoryWithOrigins (MSan with origin tracking)
  • Undefined (UBSan)
  • Thread (TSan)
  • Address;Undefined (ASan & UBSan)
  • "" (empty string) - the default option


Milestones:

  • Ensure the availability of the toolchain and prebuilt userland with the sanitizers.
  • Add a new option in pkgsrc to build packages with each sanitizer.
  • Build the packages and report problems and bugs.
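As a sketch of what the user-facing knob might look like, here is a hypothetical mk.conf fragment; the variable name PKGSRC_USE_SANITIZER is an assumption for illustration, not an existing pkgsrc option:

```make
# Hypothetical mk.conf fragment: build packages with ASan + UBSan.
# PKGSRC_USE_SANITIZER is an assumed name for the option this project
# would add; the values mirror the sanitizer list above.
PKGSRC_USE_SANITIZER=	Address;Undefined
```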
Posted Saturday afternoon, January 27th, 2018 Tags:

ALTQ (ALTernate Queueing) is an optional network packet scheduler for BSD systems. It provides various queueing disciplines and other quality of service (QoS) related components required to control resource usage.

It is currently integrated into pf(4).

Unfortunately it was written a long time ago; it suffers from a lot of code duplication and dangerous coding practices, and could use improvements in both the API and the implementation. After these problems have been addressed, it should be integrated with npf(4).

Posted late Thursday evening, September 14th, 2017 Tags:

Subtle changes in NetBSD may carry a significant performance penalty. Having a realistic performance test for various areas will allow us to find offending commits. Ideally, it would be possible to run the same tests on other operating systems so we can identify points for improvement.

It would be good to test for specific cases as well, such as:

  • Network operation with a packet filter in use
  • Typical disk workload
  • Performance of a multi-threaded program

Insert good performance testsuite examples here

It would be nice to easily test commits between particular dates. Automated runs could be implemented with the help of misc/py-anita.

Posted late Tuesday night, March 1st, 2017 Tags:

LFS is currently carrying around both support for the historical quota format ("quota1") and the recent newer 64-bit in-volume quota format ("quota2") from FFS, but neither is actually connected up or works. In fact, neither has ever worked -- there is no reason to worry about compatibility.

Define a new on-disk quota format that shares some of the properties of both the FFS formats:

  • it should be 64-bit (like quota2);

  • in LFS there is no reason to have a tree-within-a-file like quota2; just files indexed directly by uid/gid like quota1 is fine;

  • the quota files should be hidden away like in quota2;

  • there should be explicit support for deleting quota entries like quota2;

  • some of the additional bits in quota2, like default value handling, should be retained.

Implement the quota system by folding together the existing copies of the quota1 and quota2 code and writing as little new code as possible. Note that while the "ulfs" code should have more or less complete quota support and all the necessary hooks in place, you'll have to add quota hooks to the native lfs code.

You will also need to figure out (before you start coding) what if any special steps need to be taken to make sure that the quota files on disk end up consistent as segments are written out. This may require additional logic like the existing DIROP logic, which would likely be a pain.

Add an option to newfs_lfs to enable quotas. Then add code to fsck_lfs to crosscheck and correct the usage amounts. Also add logic in tunefs_lfs to turn quotas on and off. (It's ok to require a fsck afterwards when turning quotas on to compute the current usage amounts.)

(Note that as of this writing no tunefs_lfs program exists -- that will have to be corrected too.)

Posted in the wee hours of Friday night, August 27th, 2016 Tags:
  • Contact: tech-kern
  • Duration estimate: 1-2 months

The fallocate/posix_fallocate system call, and the file-system-level VOP_FALLOCATE operation backing it, allow preallocating underlying storage for files before writing into them.

This is useful for improving the physical layout of files that are written in arbitrary order, such as files downloaded by bittorrent; it also allows failing up front if disk space runs out while saving a file.

This functionality is not currently implemented for FFS; implement it. (For regular files.) There is not much to this besides running the existing block allocation logic in a loop. (Note though that as the loop might take a while for large allocations, it's important to check for signals periodically to allow the operation to be interrupted.)

Posted Friday evening, August 26th, 2016 Tags:
  • Contact: tech-kern
  • Duration estimate: 2 months

Apply TCP-like flow control to processes accessing the filesystem, particularly writers. That is: put them to sleep when there is "congestion", to avoid generating enormous backlogs and to provide fairer allocation of disk bandwidth.

This is a nontrivial undertaking as the I/O path wasn't intended to support this type of throttling. Also, the throttle should be underneath all caching (there is nothing to be gained by throttling cached accesses) but by that point attributing I/O actions to specific processes is awkward.

It might be worthwhile to develop a scheme to tag buffers with the upper-level I/O requests their pending changes came from, or something of the sort. That would likely help with readahead processing and other things as well as with flow control.

Posted early Friday morning, August 26th, 2016 Tags:

Heavily used file systems tend to spread the blocks all over the disk, especially if free space is scarce. The traditional fix for this is to use dump, newfs, and restore to rebuild a volume; however, this is not acceptable for current disk sizes.

The resize_ffs code already contains logic for relocating blocks, as it needs to be able to do that to shrink a volume; it is likely that it would serve nicely as a starting point for a defragmenting tool.

Note that safety (not scrambling the volume if the system crashes while defragging) is somewhat tricky and critically important.

Posted early Friday morning, August 26th, 2016 Tags:
  • Contact: tech-kern
  • Duration estimate: 1-2 months

NetBSD has a fixed, kernel-wide upper limit on transfer size, called MAXPHYS, which is currently set to 64k on most ports. This is too small to perform well on modern IDE and SCSI hardware; on the other hand some devices can't do more than 64k, and in some cases are even limited to less (the Xen virtual block device, for example). Software RAID will also cause requests to be split into multiple smaller requests.

This limit should be per-IO-path rather than global and should be discovered/probed as those paths are created, based on driver capability and driver properties.

Much of the work has already been done and has been committed on the tls-maxphys branch. What's needed at this point is mostly testing and probably some debugging until it's ready to merge.

This project originally suggested instead making the buffer queue management logic (which currently only sorts the queue, aka disksort) capable of splitting too-large buffers or aggregating small contiguous buffers in order to conform to device-level requirements.

Once the MAXPHYS changes are finalized and committed, this project may be simply outdated. However, it may also be worthwhile to pursue this idea as well, particularly the aggregation part.

Posted early Friday morning, August 26th, 2016 Tags:

Right now /usr/games/gomoku is not very smart. This has two consequences: first, it's not that hard to beat it if one tries; and second, if you put it in a spot, it takes annoyingly long to play and chews up tons of RAM. On older hardware or a RAM-limited virtual machine, it'll run out of memory and bail.

A few years ago when I (dholland) looked into this, there was a small but apparently nonempty community of AI researchers working on gomoku brains. There was also some kind of interface standard for gomoku brains such that they could be conveniently run against one another.

This project is to:

  • track down the latest version of that standard and add external brain support to our gomoku(6);
  • package a good gomoku brain in pkgsrc if there's a suitable open-source one;
  • based on available research, improve the brain built into our gomoku (doesn't have to be competitive, just not terrible);
  • also (if applicable) improve the framework the built-in brain code uses so it can back out if it runs out of memory.

Some of these issues were originally raised in

Note to passersby: "gomoku" is not the same game as "go".

Posted in the wee hours of Sunday night, August 8th, 2016 Tags:

Currently BSD make emits tons and tons of stat(2) calls in the course of figuring out what to do, most notably (but not only) when matching suffix rules. This causes measurable amounts of overhead, especially when there are a lot of files like in e.g. libc. First step is to quantify this overhead so you can tell what you're accomplishing.

Fixing this convincingly requires a multi-step approach: first, give make an abstraction layer for directories it's working with. This alone is a nontrivial undertaking because make is such a mess inside. This step should also involve sorting out various issues where files in other directories (other than the current directory) sometimes don't get checked for changes properly.

Then, implement a cache under that abstraction layer so make can remember what it's learned instead of making the same stat calls over and over again. Also, teach make to scan a directory with readdir() instead of populating the cache by making thousands of scattershot stat() calls, and implement a simple heuristic to decide when to use the readdir method.

Unfortunately, in general after running a program the cache has to be flushed because we don't know what the program did. The final step in this project is to make use of kqueue or inotify or similar when running programs to synchronize make's directory cache so it doesn't have to be flushed.

As an additional step one might also have make warn when a recipe touches files it hasn't been declared to touch... but note that while this is desirable it is also somewhat problematic.

Posted terribly early Friday morning, May 27th, 2016 Tags:

When developing device drivers inside the kernel, mistakes will usually cause the whole kernel to crash unrecoverably and require a reboot. But device drivers need not run inside the kernel: with rump, device driver code can be run as a process inside userland.

However, userland code has only limited access to the hardware registers needed to control devices: currently, NetBSD supports only USB device drivers in userland, via ugen(4). NetBSD should additionally support developing PCI drivers in userland with rump -- at least one driver, iwm(4), was developed using rump, but on a Linux host!

There are several milestones to this project:

  1. Implement enough of the bus_space(9) and pci_mapreg() (pci(9)) APIs to map device registers from PCI BARs, using a pci(4) device (/dev/pciN). A first approximation can be done using pci(3) and simply mmapping from mem(4) (/dev/mem), but it would be better to cooperate with the kernel so that the kernel can limit the user to mapping ranges listed in PCI BARs without granting privileges to read all physical pages in the system. Cooperation with the kernel will also be necessary to implement port I/O instead of memory-mapped I/O, without raising the I/O privilege level of the userland process, on x86 systems.

  2. Expose PCI interrupts as events that can be read from a pci(4) (/dev/pciN) device instance, and use that to implement the pci_intr(9) API in userland. For many devices, this may require a small device-specific shim in the kernel to acknowledge interrupts while they are masked -- but that is a small price to pay for rapidly iterating driver development in userland.

  3. Devise a scheme for userland to allocate and map memory for DMA in order to implement bus_dma(9). Again, a first approximation can be done by simply wiring pages with mlock(3) and then asking the kernel for a virtual-to-physical translation to program hardware DMA registers. However, this will not work on any machines with an IOMMU, which would help to prevent certain classes of catastrophic memory corruption in the case of a buggy driver. Cooperation with the kernel, and maintaining a kernel-side mirror of each bus_dmamem allocation and each bus_dmamap, will be necessary.

Inspiration may be found in the Linux uio framework. This project is not necessarily PCI-specific -- ideally, most of the code to manage bus_space(9), bus_dma(9), and interrupt event delivery should be generic. The focus is PCI because it is widely used and would be applicable to many drivers and devices for which someone has yet to write drivers.

Posted late Tuesday afternoon, February 23rd, 2016 Tags:

NetBSD configures a timer device to deliver a periodic timer interrupt, usually every 10 ms, in order to count time, wake threads that are sleeping, etc. This made sense when timer devices were expensive to program and CPUs ran at MHz. But today, CPUs run at GHz; timers on modern x86, arm, mips, etc. hardware are cheap to reprogram; programs expect greater than 10 ms resolution for sleep times; and mandatory periodic activity on idle machines wastes power.

There are four main milestones to this project:

  1. Choose a data structure for high-resolution timers, and a way to request high-resolution vs low-resolution sleeps, and adapt the various timeout functions (cv_timedwait, etc.) to use it. The current call wheel data structure for callouts provides good performance, but only for low-resolution sleeps. We need another data structure that provides good performance for high-resolution sleeps without hurting the performance of the existing call wheel for existing applications.

  2. Design a machine-independent high-resolution timer device API, implement it on a couple machines, and develop tests to confirm that it works. This might be done by adapting the struct timecounter interface to arm it for an interrupt, or might be done another way.

  3. Convert all the functions of the periodic 10 ms timer, hardclock, to schedule activity only when needed.

  4. Convert the various software subsystems that rely on periodic timer interrupts every tick, or every second, via callout(9), either to avoid periodic work altogether, or to batch it up only when the machine is about to go idle, in order to reduce the number of wakeups and thereby reduce power consumption.

Posted late Monday evening, February 22nd, 2016 Tags:
  • Contact: tech-kern
  • Duration estimate: 3 months

Programmatic interfaces in the system (at various levels) are currently specified in C: functions are declared with C prototypes, constants with C preprocessor macros, etc. While this is a reasonable choice for a project written in C, and good enough for most general programming, it falls down on some points, especially for published/supported interfaces. The system call interface is the most prominent such interface, and the one where these issues are most visible; however, other more internal interfaces are also candidates.

Declaring these interfaces in some other form, presumably some kind of interface description language intended for the purpose, and then generating the corresponding C declarations from that form, solves some of these problems. The goal of this project is to investigate the costs and benefits of this scheme, and various possible ways to pursue it sanely, and produce a proof-of-concept implementation for a selected interface that solves a real problem.

Note that as of this writing many candidate internal interfaces do not really exist in practice or are not structured enough to support this treatment.

Problems that have been observed include:

  • Using system calls from languages that aren't C. While many compilers and interpreters have runtimes written in C, or C-oriented foreign function interfaces, many do not, and rely on munging system headers in (typically) ad hoc ways to extract necessary declarations. Others cut and paste declarations from system headers and then break silently if anything ever changes.

  • Generating test or validation code. For example, there is some rather bodgy code attached to the vnode interface that allows crosschecking the locking behavior observed at runtime against a specification. This code is currently generated from the existing vnode interface specification in vnode_if.src; however, the specification, the generator, and the output code all leave a good deal to be desired. Other interfaces where this kind of validation might be helpful are not handled at all.

  • Generating marshalling code. The code for (un)marshalling system call arguments within the kernel is all handwritten; this is laborious and error-prone, especially for the various compat modules. A code generator for this would save a lot of work. Note that there is already an interface specification of sorts in syscalls.master; this would need to be extended or converted to a better description language. Similarly, the code used by puffs and refuse to ship file system operations off to userland is largely or totally handwritten and in a better world would be automatically generated.

  • Generating trace or debug code. The code for tracing system calls and decoding the trace results (ktrace and kdump) is partly hand-written and partly generated with some fairly ugly shell scripts. This could be systematized; and also, given a complete specification to work from, a number of things that kdump doesn't capture very well could be improved. (For example, kdump doesn't decode the binary values of flags arguments to most system calls.)

  • Add your own here. (If proposing this project for GSOC, your proposal should be specific about what problem you're trying to solve and how the use of an interface definition language will help solve it. Please consult ahead of time to make sure the problem you're trying to solve is considered interesting/worthwhile.)

The project requires: (1) choose a target interface and problem; (2) evaluate existing IDLs (there are many) for suitability; (3) choose one, as roll your own should be only a last resort; (4) prepare a definition for the target interface in your selected IDL; (5) implement the code generator to produce the corresponding C declarations; (6) integrate the code generator and its output into the system (this should require minimal changes); (7) implement the code generator and other material to solve your target problem.

Note that while points 5, 6, and 7 are a reasonably simple matter of programming, parts 1-3 and possibly 4 constitute research -- these parts are as important as the code is, maybe more so, and doing them well is critical to success.

The IDL chosen should if at all possible be suitable for repeating steps 4-7 for additional choices of target interface and problem, as there are a lot of such choices and many of them are probably worth pursuing in the long run.

Some things to be aware of:

  • syscalls.master contains a partial specification of the functions in the system call interface. (But not the types or constants.)
  • vnode_if.src contains a partial specification of the vnode interface.
  • There is no specification, partial or otherwise, of the other parts of the VFS interface, other than the code and the (frequently outdated) section 9 man pages.
  • There are very few other interfaces within the kernel that are (as of this writing) structured enough or stable enough to permit using an IDL without regretting it. The bus interface might be one.
  • Another possible candidate is to pursue a specification language for handling all the conditional visibility and namespace control material in the userspace header files. This requires a good deal of familiarity with the kinds of requirements imposed by standards and the way these requirements are generally understood, and is both a large and an extremely detail-oriented undertaking.

Note: the purpose of an IDL is not to encode a parsing of what are otherwise C-language declarations. The purpose of an IDL is to encode some information about the semantics of an interface. Parsing the input is the easy part of the problem. For this reason, proposals that merely suggest replacing C declarations with e.g. equivalent XML or JSON blobs are unlikely to be accepted.

(Also note that XML-based IDLs are unlikely to be accepted by the community as XML is not in general considered suitable as a source language.)

Posted terribly early Wednesday morning, February 17th, 2016 Tags:
  • Contact: tech-kern
  • Duration estimate: 2-3 months

The fdiscard system call, and the file-system-level VOP_DISCARD operation backing it, allow dropping ranges of blocks from files to create sparse files with holes. This is a simple generalization of truncate.

Discard is not currently implemented for FFS; implement it. (For regular files.) It should be possible to do so by generalizing the existing truncate code; this should not be super-difficult, but it is somewhat messy.

Note that it's important (for both the WAPBL and non-WAPBL cases) to preserve the on-disk invariants needed for successful crash recovery.

Posted early Wednesday morning, November 25th, 2015 Tags:
  • Contact: tech-kern
  • Duration estimate: 1-2 months

Implement the block-discard operation (also known as "trim") for RAIDframe.

Higher-level code may call discard to indicate to the storage system that certain blocks no longer contain useful data. The contents of these blocks do not need to be preserved until they are later written again. This means, for example, that they need not be restored during a rebuild operation.

RAIDframe should also be made to call discard on the disk devices underlying it, so those devices can take similar advantage of the information. This is particularly important for SSDs, where discard ("trim") calls can increase both the performance and write lifetime of the device.

The complicating factor is that a block that has been discarded no longer has stable contents: it might afterwards read back as zeros, or it might not, or it might change to reading back zeros (or trash) at any arbitrary future point until written again.

The first step of this project is to figure out a suitable model for the operation of discard in RAIDframe. Does it make sense to discard single blocks, or matching blocks across stripes, or only whole stripe groups, or what? What metadata should be stored to keep track of what's going on, and where does it go? Etc.

Posted early Wednesday morning, November 25th, 2015 Tags:

Implement component 'scrubbing' in RAIDframe.

RAIDframe (raid(4)) provides various RAID levels to NetBSD, but currently has no facilities for 'scrubbing' (checking) the components for read errors.


  • implement a generic scrubbing routine that can be used by all RAID types
  • implement RAID-level-specific code for component scrubbing
  • add an option to raidctl (raidctl(8)) to allow the system to run the scrubbing as required
  • update raidctl (raidctl(8)) documentation to reflect the new scrubbing capabilities, and discuss what scrubbing can and cannot do (benefits and limitations)


  • Allow the user to select whether or not to attempt to 'repair' errors
  • Actually attempt to repair errors
Posted late Thursday afternoon, November 5th, 2015 Tags:


Xen now supports the ARM CPU architecture, but NetBSD does not yet support Xen on ARM. A dom0 port would be the first step in this direction.


  • toolstack port to arm
  • Fully functioning dom0

This project would be much easier once pvops/pvh is ready, since the Xen ARM implementation only supports PVH.


Posted at midnight, August 19th, 2015 Tags:

Abstract: The blktap2 method uses a userland daemon, arbitrated by the kernel, to provide access to on-disk files. The NetBSD kernel lacks this support and currently disables blktap in xentools.


  • blktap2 driver for /dev/blktapX
  • Tool support to be enabled in xentools
  • Enable access to various file formats (QCOW, VHD, etc.) via the [ tap: ] disk specification.


The driver interacts with some areas of the kernel that differ from Linux, e.g. inter-domain page mapping and inter-domain signalling. This may make it more challenging than a simple driver, especially with test/debug/performance risk.

Note: There's opportunity to compare notes with the FUSE/rump implementations.


Posted at midnight, August 19th, 2015 Tags:


The NetBSD boot path contains conditionally compiled code for XEN. In the interests of moving towards unified pvops kernels, the code can be modified to be conditionally executed at runtime.


  • Removal of #ifdef conditional code from all source files.
  • New minor internal API definitions, where required, to encapsulate platform specific conditional execution.
  • Performance comparison figures for before and after.
  • Clear up the API clutter to make way for pvops.

Note: Header files are exempt from #ifdef removal requirement.


The number of changes required can be estimated as follows:

drone.sorcerer> pwd
drone.sorcerer> find . -type f -name '*.[cSs]' -exec grep -H "^#if.* *\\bXEN\\b" '{}' \; | wc -l

The effort required in this task is likely to be highly variable, depending on the case. In some cases, minor API design may be required.

Most of the changes are likely to be mechanical.

Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 32 hours


NetBSD dom0 kernels currently operate in fully PV mode. On amd64, this has the same performance drawbacks as PV mode in domU. PVH mode instead provides an HVM container for pagetable management while virtualising everything else. This mode is available for dom0; this project attempts to support it.

Note: Xen/Linux is moving to this dom0 pvh support model.

Deliverables: PVH mode dom0 operation.


This project depends on the domU pvops/pvh project. The main work is configuration- and testing-related (bringing in native device drivers as well as backend ones).

The bootstrap path may need to be tweaked for dom0 specific things.

Dependencies: This project depends on completion of the domU pvops/pvh project. This project can enable the NetBSD/Xen/ARM dom0 port.


Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 64 hours

Abstract: Dom0 is not SMP safe right now. The main culprits are the backend drivers for blk and netif (and possibly others, such as pciback).


  • SMP capable dom0


This involves extensive stress testing of the dom0 kernel for concurrency and SMP workloads. Locking in the various backend and other drivers needs to be reviewed, reworked and tested.

Interrupt paths need to be reviewed, and the interrupt handling code needs to be reworked, potentially. The current event code doesn't multiplex well on vcpus. This needs reinvestigation.

This is a test/debug heavy task, since MP issues can crop up in various unrelated parts of the kernel.

Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 48 hours

Abstract: This is the first step towards PVH mode. This is relevant only for DomU. It speeds up amd64: PV on amd64 has sub-optimal TLB and syscall overhead due to privilege-level sharing between kernel and userland.


  • operational PV drivers (netif.h, blkif.h) on HVM mode.


  • Xen needs to be detected at boottime.
  • The shared page and hypercall interface need to be setup.
  • xenbus(4) attachment during boot-time configuration.

Scope (Timelines):

This project involves intrusive kernel configuration changes, since NetBSD currently has separate native (GENERIC) and xen (XEN3_DOMU) kernels. A "hybrid" build is required to bring in the PV drivers and xen infrastructure support such as xenbus(4).

Once the hypercall page is up however, progress should be faster.


Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 48 hours

Abstract: This is the final step towards PVH mode. This is relevant only for DomU. Xen is moving to this eventual dom0 pvh support model. This project depends on completion of pv-on-hvm.


  • operational interrupt (event) handling
  • PV only drivers.


Following on from the pv-on-hvm project, this project removes all remaining dependencies on native drivers. All drivers are PV only. Interrupt setup and handling is via the "event" mechanism.

Scope (Timelines):

This project has some uncertainty based on the change in the interrupt mechanism. Since the interrupt mechanism moves from apic + IDT based, to event based, there's room for debug/testing spillover.

Further, private APIs may need to be developed to partition the PV and native setup and management code for both mechanisms.


Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 16 hours


Although Xen has ACPI support, the kernel needs to make explicit hypercalls for power-management-related functions.

Xen supports sleep/resume, and this support needs to be exported via the dom0 kernel.


  • sleep/resume support for NetBSD/Xen dom0


There are two approaches to this. The first one involves using the kernel to export syscall nodes for ACPI.

The second one involves Xen aware power scripts, which can write directly to the hypercall kernfs node - /kern/privcmd. Device tree suspend could happen via drvctl(8), in theory.

Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 16 hours

Abstract: The pvfb video driver and the keyboard/pointer driver are useful for GUI interaction with a domU console.


A framebuffer device and console, which can be used to interact with the domU. The qemu process on the dom0 can then use this buffer to display to the regular X server using SDL, as it does with HVM domUs. This is very useful for GUI frontends to Xen.


A simple framebuffer driver is implemented.

Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 32 hours


Make the dom0 kernel use and provide drm(4) support. This enables GUI use of the dom0.


Functioning X server using native kernel style drmkms support.


  • mtrr support for Xen
  • high memory RAM extent allocation needs special attention (See: kern/49330)
  • native driver integration and testing.
Posted at midnight, August 19th, 2015 Tags:
  • Contact: port-xen
  • Duration estimate: 64 hours

Abstract: This project enables NetBSD to have physical RAM pages dynamically added via uvm_page_physload() and dynamically removed using a complementary function.

Rationale: Makes NetBSD a viable datacentre OS, with hot-spare RAM add/remove support.


  • dynamic physical page add/remove support
  • Xen balloon driver usage of this support.


Further API and infrastructural changes may be required to accommodate these changes. The physical page management code will need to be enhanced beyond a fixed array, while taking into account memory-attribute tagging information.

Posted at midnight, August 19th, 2015 Tags:

Abstract: This project involves allowing SCSI devices on the dom0 to be passed through to the domU. Individual SCSI devices can be passed through dynamically. The domU needs to have a "frontend" driver that can communicate with the backend driver which runs on the dom0 and takes care of arbitrating and sometimes duplicating the requests.


  • Functioning backend and frontend drivers for dom0 and domU respectively.
  • Make available high performance SCSI device to PV and PVHVM domains.


Posted at midnight, August 19th, 2015 Tags:

Abstract: This project involves allowing USB devices on the dom0 to be passed through to the domU. Individual USB devices can be passed through dynamically. The domU needs to have a "frontend" driver that can communicate with the backend driver which runs on the dom0 and takes care of arbitrating and sometimes duplicating the requests.


  • Functioning backend and frontend drivers for dom0 and domU respectively.
  • Make available USB device passthrough to PV and PVHVM domains.


Since the feature is under development in Xen, it is advisable to start with a frontend implementation in order to engage with the project. Once we have a functioning frontend on a non-NetBSD dom0, we can work with the community to develop the backend.


Posted at midnight, August 19th, 2015 Tags:

Abstract: Current libvirt support in NetBSD doesn't include Xen support.

Rationale: Enables GUI-based domU managers, and makes the pvfb work easily accessible.


  • Fully functioning libvirt with Xen "driver" support.


Posted at midnight, August 19th, 2015 Tags:

Abstract: NetBSD/Xen currently doesn't support __HAVE_DIRECT_MAP.


  • Direct Mapped functioning NetBSD/Xen, both dom0 and domU
  • Demonstrable performance numbers
  • Reduce TLB contention and clean up code for pvops/pvh

Implementation: This change involves testing for the Xen and CPU featureset, and tweaking the pmap code so that 1G superpages can be requested from the hypervisor and direct mapped the same way native code does.

Posted Tuesday night, August 18th, 2015 Tags:

Add a query optimizer to find(1).

Currently find builds a query plan for its search, and then executes it with little or no optimization. Add an optimizer pass on the plan that makes it run faster.

Things to concentrate on are transforms that allow skipping I/O: not calling stat(2) on files that will not be matched, for example, or not recursing into subdirectories whose contents cannot ever match. Combining successive string matches into a single match pattern might also be a win; so might precompiling match patterns into an executable match form (like with regcomp(3)).

To benefit from many of the possible optimizations it may be necessary to extend the fts(3) interface and/or extend the query plan schema or the plan execution logic. For example, currently it doesn't appear to be possible for an fts(3) client to take advantage of file type information returned by readdir(3) to avoid an otherwise unnecessary call to stat(2).

Step 1 of the project is to choose a number of candidate optimizations, and for each identify the internal changes needed and the expected benefits to be gained.

Step 2 is to implement a selected subset of these based on available time and cost/benefit analysis.

It is preferable to concentrate on opportunities that can be found in find invocations likely to actually be typed by users or issued by programs or infrastructure (e.g. in pkgsrc), vs. theoretical opportunities unlikely to appear in practice.

Posted late Wednesday evening, July 8th, 2015 Tags:

Test and debug the RAID 6 implementation in RAIDframe.

NetBSD uses RAIDframe (raid(4)), and stub code exists for RAID 6, but it is neither well documented nor well tested. Obviously, this code needs to be researched and vetted by an interested party. Other BSD projects should be consulted freely.

Milestones:

  • set up a working RAID 6 using RAIDframe
  • document RAID 6 in the raid(4) manual page
  • port/develop a set of reliability and performance tests
  • fix bugs along the way
  • automate RAID testing in atf

Bonus:

  • Document how to add new RAID levels to RAIDframe
  • (you're awesome bonus) add RAID 1+0, etc.

Posted Wednesday night, February 18th, 2015 Tags:
  • Contact: tech-kern
  • Duration estimate: 1-2 months

The mlock() system call locks pages in memory; however, it's meant for real-time applications that wish to avoid pageins.

There's a second class of applications that want to lock pages in memory: databases and anything else doing journaling, logging, or ordered writes of any sort want to lock pages in memory to avoid pageouts. That is, it should be possible to lock a (dirty) page in memory so that it does not get written out until after it's unlocked.

It is a bad idea to try to make mlock() serve this purpose as well as the purpose it already serves, so add a new call, perhaps mlockin(), and implement support in UVM.

Then for extra credit ram it through POSIX and make Linux implement it :-)

Posted terribly early Monday morning, February 16th, 2015 Tags:

Add publish-subscribe sockets to AF_UNIX (filesystem sockets).

The idea of a publish-subscribe socket (as opposed to a traditional socket) is that anything written to one end is read out by all processes reading from the other end.

This raises some issues that need attention; most notably, if a process doesn't read data sent to it, how long do we wait (or how much data do we allow to accumulate) before dropping the data or disconnecting that process?

It seems that one would want to be able to have both SOCK_STREAM and SOCK_DGRAM publish/subscribe channels, so it isn't entirely clear yet how best to graft this functionality into the socket API. Or the socket code. It might end up needing to be its own thing instead, or needing to extend the socket API, which would be unfortunate but perhaps unavoidable.

The goal for these sockets is to provide a principled and robust scheme for passing notices around. This will allow replacing multiple existing ad hoc callback schemes (such as for notices of device insertions) and also to allow various signaling schemes among user processes that have never been very workable in Unix. Those who remember AREXX will recognize the potential; but of course it will take time before anything like that can be orchestrated.

For this reason it's important that it be reasonably easy for one of the endpoints to be inside the kernel; but it's also important that peer credentials and permissions work.

Another goal is to be able to use publish/subscribe sockets to provide a compatibility library for Linux desktop software that wants to use dbus or any successor to dbus.

I'm marking this project hard as it's still only about half baked.

Posted terribly early Monday morning, February 16th, 2015 Tags:

Add support to UVM to allow processes to map pages directly from underlying objects that are already mapped into memory: for example, files in tmpfs; files in mfs; and also files on memory-mapped storage devices, like raw flash chips or experimental storage-class memory hardware.

This allows accesses (most notably, execution of user programs) to go directly to the backing object without copying the pages into main system memory. (Which in turn has obvious advantages.)

Note that uebayasi@ was working on this at one point but his changes did not get merged.

Posted terribly early Monday morning, February 16th, 2015 Tags:
  • Contact: tech-kern
  • Duration estimate: 2 months

Add per-user memory usage limits (also known as "quotas") to tmpfs, using the existing quota system infrastructure.

Posted terribly early Monday morning, February 16th, 2015 Tags:
  • Contact: tech-kern
  • Duration estimate: 2 months

While currently we have the cgd(4) driver for encrypting disks, setting it up is fairly involved. Furthermore, while it's fairly easy to use it just for /home, in an ideal world the entire disk should be encrypted; this leads to some nontrivial bootstrapping problems.

Develop a scheme for mounting root on cgd that does not require explicit manual setup, that passes cryptographic muster, and that protects everything on the root volume except for what absolutely must be exposed. Implement it.
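For contrast, today's manual setup involves generating a parameters file with cgdconfig(8) and listing the device in /etc/cgd/cgd.conf, roughly as follows (device and file names are illustrative; see cgdconfig(8) for the actual options):

```
# one-time: generate a parameters file for the target partition
#   cgdconfig -g -o /etc/cgd/wd0e aes-cbc 256
#
# /etc/cgd/cgd.conf:
cgd0    /dev/wd0e    /etc/cgd/wd0e
```

Everything above must exist, unencrypted and readable, before the cgd can be configured, which is exactly the bootstrapping problem this project needs to solve for the root volume.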

The following is a non-exhaustive list of issues to consider:

  • How should we tell when root should be on cgd (perhaps in boot.cfg?)
  • When (and how) do we enter the passphrase needed to mount root (at mount-root time? in the bootloader? after mounting a fake root?)
  • Key management for the encryption passphrase
  • Where to keep the bootloader and/or kernels
  • Should the bootloader be able to read the cgd to get the boot kernel from it?
  • If the kernel is not on cgd, should it be signed to allow the bootloader to verify it?
  • Integration with sysinst so all you need to do to get FDE is to hit a checkbox
  • Perhaps, making it easy or at least reasonably possible to migrate an unencrypted root volume to cgd

Note that while init(8) currently has a scheme for mounting a temporary root and then chrooting to the real root afterwards, it doesn't work all that well. Improving it is somewhat difficult; also, ideally init(8) would be on the encrypted root volume. It would probably be better to support mounting the real root directly on cgd.

Another option is a pivot_root type of operation like Linux has, which allows mounting a fake root first and then shuffling the mount points to move something else into the / position. This has its drawbacks as well, and again ideally there would be no unencrypted fake root volume.

Posted terribly early Monday morning, February 16th, 2015 Tags:

The current asynchronous I/O (aio) implementation works by forking a background thread inside the kernel to do the I/O synchronously. This is a starting point, but one thread limits the amount of potential parallelism, and adding more threads falls down badly when applications want to have large numbers of requests outstanding at once.

Furthermore, the existing code doesn't even work in some cases; this is not well documented but there have been scattered reports of problems that nobody had time to look into in detail.

In order to make asynchronous I/O work well, the I/O path needs to be revised, particularly in the kernel's file system interface, so that all I/O operations are asynchronous requests by default. It is easy for high-level code to wait synchronously for lower-level asynchronous requests to complete; it is much more problematic for an asynchronous request to call into code that expects to be synchronous.

The flow control project, which also requires substantial revisions to the I/O path, naturally combines with this project.

Posted in the wee hours of Sunday night, February 16th, 2015 Tags:

NetBSD has preliminary DTrace support, but only the SDT and FBT providers are implemented.

riz@ has a patch implementing a syscall provider.

Posted early Saturday morning, February 14th, 2015 Tags:

The Ext4 file system is the standard file system for Linux (recent Ubuntu, and somewhat older Fedora). It is the successor of the Ext3 file system.

It has journaling, larger file-system volumes, improved timestamps, and other new features.

The NetBSD kernel support and accompanying userland code should be written.

It is not clear at the moment if this should be done by working on the existing ext2 code or not, and if not, whether it should be integrated with the ufs code or not.

Posted early Saturday morning, February 14th, 2015 Tags:

Linux provides an ISC-licensed Broadcom SoftMAC driver. The source code is included in the Linux kernel tree.

Posted early Saturday morning, February 14th, 2015 Tags:

Xilinx MicroBlaze is a RISC processor for Xilinx FPGA chips. MicroBlaze can be configured with an MMU, so NetBSD could support it.

Posted early Saturday morning, February 14th, 2015 Tags:

Google Chrome is widely used nowadays, and it has some state-of-the-art functionalities.

The Chromium web browser is the open-source edition of Google Chrome.

Currently Chromium is present in pkgsrc-wip as wip/chromium. Please see wip/chromium/TODO for a TODO list about it.

Posted terribly early Saturday morning, February 14th, 2015 Tags:

If one embarks on a set of package updates and it doesn't work out too well, it is nice to be able to roll back to a previous state. This entails two things: first, reverting the set of installed packages to what it was at some chosen prior time, and second, reverting changes to configuration, saved application state, and other such material that might be needed to restore to a fully working setup.

This project is about the first part, which is relatively tractable. The second part is Hard(TM), but it's pointless to even speculate about it until we can handle the first part, which we currently can't. Also, in many cases the first part is perfectly sufficient to recover from a problem.

Ultimately the goal would be to be able to do something like pkgin --revert yesterday but there are a number of intermediate steps to that to provide the infrastructure; after that adding the support to pkgin (or whatever else) should be straightforward.

The first thing we need is a machine-readable log somewhere of package installs and deinstalls. Any time a package is installed or removed, pkg_install adds a record to this log. It also has to be possible to trim the log so it doesn't grow without bound -- it might be interesting to keep the full history of all package manipulations over ten years, but for many people that's just a waste of disk space. Adding code to pkg_install to do this should be straightforward; the chief issues are

  • choosing the format
  • deciding where to keep the data

both of which require some measure of community consensus.

Preferably, the file should be text-based so it can be manipulated by hand if needed. If not, there ought to be some way to recover it if it gets corrupted. Either way the file format needs to be versioned and extensible to allow for future changes.

The file format should probably have a way to enter snapshots of the package state in addition to change records; otherwise computing the state at a particular point requires scanning the entire file. Note that this is very similar to deltas in version control systems and there's a fair amount of prior art.
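As a sketch only (record syntax and package names are invented for illustration, not a proposal), a versioned text log mixing change records with periodic snapshots might look like:

```
# pkgdb event log, format version 1
version 1
snapshot 2014-08-01T00:00:00Z pkgin-0.6.4 digest-20121220
install  2014-08-12T10:42:51Z digest-20140828
remove   2014-08-12T10:42:51Z digest-20121220
install  2014-08-15T09:03:40Z vim-7.4.335
```

A state query replays the log forward from the nearest preceding snapshot, much as version control systems reconstruct a revision from deltas.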

Note that we can already almost do this using /var/backups/work/pkgs.current,v; but it only updates once a day and the format has assorted drawbacks for automated use.

The next thing needed is a tool (maybe part of pkg_info, maybe not) to read this log and both (a) report the installed package state as of a particular point in time, and (b) print the differences between then and now, or between then and some other point in time.

Given these two things it becomes possible to manually revert your installed packages to an older state by replacing newer packages with older packages. There are then two further things to do:

Arrange a mechanism to keep the .tgz files for old packages on file.

With packages one builds oneself, this can already be done by having them accumulate in /usr/pkgsrc/packages; however, that has certain disadvantages, most notably that old packages have to be pruned by hand. Also, for downloaded binary packages no comparable scheme exists yet.

Provide a way to compute the set of packages to alter, install, or remove to switch to a different state. This is somewhat different from, but very similar to, the update computations that tools like pkgin and pkg_rolling-replace do, so it probably makes sense to implement this in one or more of those tools rather than in pkg_install; but perhaps not.

There are some remaining issues, some of which aren't easily solved without strengthening other things in pkgsrc. The most notable one is: what about package options? If I rebuild a package with different options, it's still the "same" package (same name, same version) and even if I record the options in the log, there's no good way to distinguish the before and after binary packages.

Posted terribly early Saturday morning, August 30th, 2014 Tags:

Some of the games in base would be much more playable with even a simple graphic interface. The card games (e.g. canfield, cribbage) are particularly pointed examples; but quite a few others, e.g. atc, gomoku, hunt, and sail if anyone ever fixes its backend, would benefit as well.

There are two parts to this project: the first and basically mostly easy part is to pick a game and write an alternate user interface for it, using SDL or gtk2 or tk or whatever seems appropriate.

The hard part is to arrange the system-level infrastructure so that the new user interface appears as a plugin for the game that can be installed from pkgsrc but that gets found and run if appropriate when the game is invoked from /usr/games. The infrastructure should be sufficiently general that lots of programs in base can have plugins.

Some things this entails:

  • possibly setting up a support library in the base system for program plugins, if it appears warranted;

  • setting up infrastructure in the build system for programs with plugins, if needed;

  • preferably also setting up infrastructure in the build system for building plugins;

  • choosing a place to put the header files needed to build external plugins;

  • choosing a place to put plugin libraries, too, as there isn't a common model out there yet;

  • establishing a canonical way for programs in base to find things in pkgsrc, which is not technically difficult but will require a lot of wrangling to reach a community consensus;

  • setting up any pkgsrc infrastructure needed to build plugin packages for base (this should take little or no work).

It is possible that plugins warrant a toplevel directory under each prefix rather than being stuffed in lib/ directories; e.g. /usr/plugins, /usr/pkg/plugins, /usr/local/plugins, so the pkgsrc-installed plugins for e.g. rogue would go by default in /usr/pkg/plugins/rogue.

Note that while this project is disguised as a frivolous project on games, the infrastructure created will certainly be wanted in the medium to long term for other more meaty things. Doing it on games first is a way to get it done without depending on or conflicting with much of anything else.

Posted Wednesday afternoon, April 2nd, 2014 Tags:

In MacOS X, while logged in on the desktop, you can switch to another user (leaving yourself logged in) via a system menu.

The best approximation to this we have right now is to hit ctrl-alt-Fn, switch to a currently unused console, log in, and run startx with an alternate screen number. This has a number of shortcomings, both from the point of view of general polish (logging in on a console and running startx is very untidy, and you have to know how) and of technical underpinnings (starting multiple X servers uses buckets of memory, may cause driver or drmgem issues, acceleration may not work except in the first X server, etc.)

Ideally we'd have a better scheme for this. We don't necessarily need something as slick as OS X provides, and we probably don't care about Apple's compositing effects when switching, but it's useful to be able to switch users as a way of managing least privilege and it would be nice to make it easy to do.

The nature of X servers makes this difficult; for example, it isn't in any way safe to use the same X server for more than one user. It also isn't safe to connect a secure process to a user's X server to display things.

It seems that the way this would have to work is akin to job control: you have a switching supervisor process, which is akin to a shell, that runs as root in order to do authentication and switches (or starts) X servers for one or more users. The X servers would all be attached, I guess, to the same graphics backend device (wsfb, maybe? wsdrmfb?) and be switched in and out of the foreground in much the way console processes get switched in and out of the foreground on a tty.

You have to be able to get back to the switching supervisor from whatever user and X server you're currently running in. This is akin to ^Z to get back to your shell in job control. However, unlike in job control there are security concerns: the key combination has to be something that a malicious application, or even a malicious X server, can't intercept. This is the "secure attention key". Currently even the ctrl-alt-Fn sequences are handled by the X server; supporting this will take quite a bit of hacking.

Note that the secure attention key will also be wanted for other desktop things: any scheme that provides desktop-level access to system configuration needs to be able to authenticate a user as root. The only safe way to do this is to notify the switching supervisor and then have the user invoke the secure attention key; then the user authenticates to the switching supervisor, and that in turn provides some secured channel back to the application. This avoids a bunch of undesirable security plumbing as is currently found in (I think) GNOME; it also avoids the habit GNOME has of popping up unauthenticated security dialogs asking for the root password.

Note that while the switching supervisor could adequately run in text mode, making a nice-looking graphical one won't be difficult. Also that would allow the owner of the machine to configure the appearance (magic words, a custom image, etc.) to make it harder for an attacker to counterfeit the thing.

It is probably not even possible to think about starting this project until DRM/GEM/KMS stuff is more fully deployed, as the entire scheme presupposes being able to switch between X servers without the X servers' help.

Posted Wednesday afternoon, April 2nd, 2014 Tags:

Support for "real" desktops on NetBSD is somewhat limited and also scattershot. Part of the reason for this is that most of the desktops are not especially well engineered, and they contain poorly designed infrastructure components that interact incestuously with system components that are specific to and/or make sense only on Linux. Providing the interfaces these infrastructure components wish to talk to is a large job, and Linux churn being what it is, such interfaces are usually moving targets anyway. Reimplementing them for NetBSD is also a large job and often presupposes things that aren't true or would require that NetBSD adopt system-level design decisions that we don't want.

Rather than chase after Linux compatibility, we are better off designing the system layer and infrastructure components ourselves. It is easier to place poorly conceived interfaces on top of a solid design than to reimplement a poorly conceived design. Furthermore, if we can manage to do a few things right we should be able to get at least some uptake for them.

The purpose of this project, per se, is not to implement any particular piece of infrastructure. It is to identify pieces of infrastructure that the desktop software stacks need from the operating system, or provide themselves but ought to get from the operating system, figure out solid designs that are compatible with Unix and traditional Unix values, and generate project descriptions for them in order to get them written and deployed.

For example, GNOME and KDE rely heavily on dbus for sending notifications around. While dbus itself is a more or less terrible piece of software that completely fails to match the traditional Unix model (rather than doing one thing well, it does several things at once, pretty much all badly) it is used by GNOME and KDE because there are things a desktop needs to do that require sending messages around. We need a transport for such messages; not a reimplementation of dbus, but a solid and well-conceived way of sending notices around among running processes.

Note that one of the other things dbus does is start services up; we already have inetd for that, but it's possible that inetd could use some strengthening in order to be suitable for the purpose. It will probably also need to be taught about whatever message scheme we come up with. And we might want to e.g. set up a method for starting per-user inetd at login time.

(A third thing dbus does is serve as an RPC library. For this we already have XDR, but XDR sucks; it may be that in order to support existing applications we want a new RPC library that's more or less compatible with the RPC library parts of dbus. Or that may not be feasible.)

This is, however, just one of the more obvious items that arises.

Your goal working on this project is to find more. Please edit this page and make a list, and add some discussion/exposition of each issue and possible solutions. Ones that become fleshed out enough can be moved to their own project pages.

Posted Wednesday afternoon, April 2nd, 2014 Tags:

Port or implement NFSv4.

As of this writing the FreeBSD NFS code, including NFSv4, has been imported into the tree but no work has yet been done on it. The intended plan is to port it and eventually replace the existing NFS code, which nobody is particularly attached to.

It is unlikely that writing new NFSv4 code is a good idea, but circumstances might change.

Posted terribly early Thursday morning, February 27th, 2014 Tags:
  • Contact: tech-kern
  • Duration estimate: 3-6 months

Port Dragonfly's Hammer file system to NetBSD.

Note that because Dragonfly has made substantial changes (different from NetBSD's substantial changes) to the 4.4BSD vfs layer this port may be more difficult than it looks up front.

Posted terribly early Thursday morning, February 27th, 2014 Tags:

An important part of a binary-only environment is to support something similar to the existing options framework, but producing different combinations of pkgs.

There is an undefined amount of work required to accomplish this, and also a set of standards and naming conventions to be applied for the output of these pkgs.

This project should also save on having to rebuild "base" components of software to support different options: see amanda-base for an example.

OpenBSD supports this now and should be leveraged heavily for borrowing.

Preliminary work was done during Google Summer of Code 2016 as part of the Split debug symbols for pkgsrc builds project. However, the support is far from complete. In particular, to complete the project the following tasks need to be accomplished:

  • Complete the SUBPACKAGES support similar to what was done during the GSoC (with SUBPACKAGES and !SUBPACKAGES case code duplication)
  • When the SUBPACKAGES support is complete we can switch to an implicit (and hidden) subpackage as suggested by ?joerg in order to get rid of code duplication and having a single control flow relative to SUBPACKAGES. In other words: every package will always have at least one subpackage.
  • Adapt mk/ to SUBPACKAGES and other possible candidates like tex-*-doc, etc. After doing that, look at less trivial possible SUBPACKAGES candidates like databases/postgresql*-*.
Posted Thursday evening, October 3rd, 2013 Tags:
  • Contact: tech-kern
  • Duration estimate: 3 months

Lua is obviously a great choice for an embedded scripting language in both kernel space and user space.

However, there is an issue: Lua needs bindings to interface with NetBSD for it to be useful.

Someone needs to take on the grunt-work task of simply creating Lua interfaces into every possible NetBSD component.

That same someone needs to produce meaningful documentation.

As this enhances Lua's usefulness, more Lua bindings will then be created, etc.

This project's goal is to build that momentum by attempting to define and create a solid baseline of Lua bindings.

Here is a list of things to possibly target:

  • filesystem hooks
  • sysctl hooks
  • proplib
  • envsys
  • MI system bus code


Posted late Sunday afternoon, June 2nd, 2013 Tags:

Right now kqueue, the kernel event mechanism, only attaches to individual files. This works great for sockets and the like but doesn't help for a directory full of files.

The end result should be feature parity with Linux's inotify (a single directory's worth of notifications, not necessarily sub-directories), and the design must be good enough to be accepted by other users of kqueue: macOS, FreeBSD, etc.

For an overview of inotify, start on Wikipedia.
Another example API is the Windows event API.

I believe most of the work will be around genfs, namei, and vfs.

Posted at lunch time on Monday, April 15th, 2013 Tags:

pkgin is aimed at being an apt-/yum-like tool for managing pkgsrc binary packages. It relies on pkg_summary(5) for installation, removal and upgrade of packages and associated dependencies, using a remote repository.

While pkgin is now widely used and seems to handle package installation, upgrade and removal correctly, there's room for improvement. In particular:

Main quest

  • Support for multi-repository
  • Speed-up installed packages database calculation
  • Speed-up local vs remote packages matching
  • Implement an automated-test system
  • Handle conflicting situations (MySQL conflicting with MySQL...)
  • Better logging for installed / removed / warnings / errors

To be confirmed / discussed:

  • Make pkgin independent from pkg_install binaries, use pkg_install libraries or abstract them

Bonus I

In order to ease pkgin's integration with third party tools, it would be useful to split it into a library (libpkgin) providing all methods needed to manipulate packages, i.e., all pkgin's runtime capabilities and flags.

Bonus II (operation "burn the troll")

It would be a very nice addition to abstract the SQLite backend so pkgin could be plugged into any database system. A proof of concept using bdb or cdb would fulfill the task.

Useful steps:

  • Understand pkgsrc
  • Understand pkg_summary(5)'s logic
  • Understand SQLite integration
  • Understand pkgin's `impact.c' logic
Posted early Tuesday morning, April 9th, 2013 Tags:
  • Contact: tech-pkg
  • Duration estimate: 1 month

Mancoosi is a project to measure distribution quality by the number of conflicts in a software distribution environment.

The main work of this project would be to analyse possible incompatibilities of pkgsrc and Mancoosi (i.e., incompatibilities between Debian and pkgsrc), and to write a converter from pkg_summary(5) to CUDF (the format used by Mancoosi). You will need OCaml for doing so, as the libraries used by Mancoosi are written in OCaml.

If you are a French student in the third year of your Bachelor's, or the first year of your Master's, you could also do this project as an internship for your studies.

Posted at lunch time on Wednesday, March 27th, 2013 Tags:

pygrub, which is part of the xentools package, allows booting a Linux domU without having to keep a copy of the kernel and ramdisk in the dom0 file system.

It extracts grub.conf from the domU disk, processes it, and presents a grub-like menu. It then extracts the appropriate kernel and ramdisk image from the domU disk and stores temporary copies of them. Finally, it creates a file containing:

  • kernel=\
  • ramdisk=\
  • extras=\

This file gets processed by xend as if it were part of the domU config.

Most of the code required to do the NetBSD equivalent should be available as part of /boot or the libraries it depends on. The path to the source code for /boot is: sys/arch/i386/stand/boot.

Posted at midnight, December 27th, 2012 Tags:

Recent versions of rpmbuild for Red Hat create a second RPM, for later installation, containing debug symbols.

This is a very convenient system for having symbols available for debugging but not clogging up the system until crashes happen and debugging is required.

The end result of this project should be a mk.conf flag or build target (debug-build) which would result in a set of symbol files gdb could use, and which could be distributed with the pkg.

Most parts of this project were done during Google Summer of Code 2016 as part of the Split debug symbols for pkgsrc builds project. However, in order to be properly integrated in pkgsrc it will need to support multipkg, some small adjustments, and more testing in the wild.

Posted at lunch time on Sunday, December 2nd, 2012 Tags:
  • Contact: tech-pkg
  • Mentors: unknown
  • Duration estimate: unknown

Currently, bulk build results are sent to the pkgsrc-bulk mailing list. To figure out if a package is or has been building successfully, or when it broke, one must wade through the list archives and search the report e-mails by hand. Furthermore, to figure out what commits if any were correlated with a breakage, one must wade through the pkgsrc-changes archive and cross-reference manually.

The project is to produce a web/database application, run from the pkgsrc releng website, that tracks bulk build successes and failures and provides search and cross-referencing facilities.

The application should subscribe to the pkgsrc-bulk and pkgsrc-changes mailing lists and ingest the data it finds into a SQL database. It should track commits to each package (and possibly infrastructure changes that might affect all packages) on both HEAD and the current stable branch, and also all successful and failed build reports on a per-platform (OS and/or machine type) basis.

The web part of the application should be able to retrieve summaries of currently broken packages, in general or for a specific platform and/or specific branch. It should also be able to generate a more detailed report about a single package, containing for example which platforms it has been built on recently and whether it succeeded or not; also, if it is broken, how long it has been broken, and the history of package commits and version bumps vs. build results. There will likely be other searches/reports wanted as well.

The application should also have an interface for people who do partial or individual-package check builds; that is, it should be able to generate a list of packages that have not been built since they were last committed, on a given platform or possibly on a per-user basis, and accept results from attempting to build these or subsets of these packages. It is not entirely clear what this interface should be (and e.g. whether it should be command-line-based, web-based, or what, and whether it should be limited to developers) and it's reasonable to expect that some refinements or rearrangements to it will be needed after the initial deployment.

The application should also be able to record cross-references to the bug database. To begin with at least it's reasonable for this to be handled manually.

This project should be a routine web/database application; there is nothing particularly unusual about it from that standpoint. The part that becomes somewhat less trivial is making all the data flows work: for example, it is probably necessary to coordinate an improvement in the way bulk build results are tagged by platform. It is also necessary to avoid importing the reports that appear occasionally on pkgsrc-bulk from misconfigured pbulk installs.

Note also that "OS" and "machine type" are not the only variables that can affect build outcome. There are also multiple compilers on some platforms, for which the results should be tracked separately, plus other factors such as non-default installation paths. Part of the planning phase for this project should be to identify all the variables of this type that should be tracked.

Also remember that what constitutes a "package" is somewhat slippery as well. The pkgsrc directory for a package is not a unique key; multiversion packages, such as Python and Ruby extensions, generate multiple results from a single package directory. There are also a few packages where for whatever reason the package name does not match the pkgsrc directory. The best key seems to be the pkgsrc directory paired with the package-name-without-version.

Some code already exists for this, written in Python using Postgres. Writing new code (rather than working on this existing code) is probably not a good plan.

Posted late Sunday night, April 9th, 2012 Tags:

Font handling in Unix has long been a disgrace. Every program that needs to deal with fonts has had to do its own thing, and there has never been any system-level support, not even to the point of having a standard place to install fonts in the system. (While there was/is a place to put X11 fonts, X is only one of the many programs doing their own thing.)

Font management should be a system service. There cannot be one type of font file -- it is far too late for that -- but there should be one standardized place for fonts, one unified naming scheme for fonts and styles, and one standard way to look up and open/retrieve/use fonts. (Note: "one standardized place" means relative to an installation prefix, e.g. share/fonts.)

Nowadays fontconfig is capable of providing much of this.

The project is:

  1. Figure out for certain if fontconfig is the right solution. Note that even if the code isn't, the model may be. If fontconfig is not the right solution, figure out what the right solution is. Also ascertain whether existing applications that use the fontconfig API can be supported easily/directly or if some kind of possibly messy wrapper layer is needed and doing things right requires changing applications. Convince the developer community that your conclusions are correct so that you can go on to step 2.

1a. Part of this is identifying exactly what is involved in "managing" and "handling" fonts and what applications require.

  2. Implement the infrastructure. If fontconfig is the right solution, this entails moving it from the X sources to the base sources. Also, some of the functionality/configurability of fontconfig is probably not needed in a canonicalized environment. All of this should be disabled if possible. If fontconfig is not the right solution, implement something else in base.

  3. Integrate the new solution in base. Things in base that use fonts should use fontconfig or the replacement for fontconfig. This includes console font handling, groff, mandoc, and X11. Also, the existing fonts that are currently available only to subsets of things in base should be made available to all software through fontconfig or its replacement.

3a. It would also be useful to kill off the old xfontsel and replace it with something that interoperates fully with the new system and also has a halfway decent user interface.

  4. Deploy support in pkgsrc. If the new system does not itself provide the fontconfig API, provide support for that via a wrapper package. Teach applications in pkgsrc that use fontconfig to recognize and use the new system. (This should be fairly straightforward and not require patching anything.) Make pkgsrc font installation interoperate with the new system. (This should be fairly straightforward too.) Take an inventory of applications that use fonts but don't use fontconfig. Patch one or two of these to use the new system to show that it can be done easily. If the new system is not fontconfig and has its own better API, take an inventory of applications that would benefit from being patched to use the new API. Patch one or two of these to demonstrate that it can be done easily. (If the answers from step 1 are correct, it should be fairly easy for most ordinary applications.)

  5. Persuade the rest of the world that we've done this right and try to get them to adopt the solution. This is particularly important if the solution is not fontconfig. Also, if the solution is not fontconfig, provide a (possibly limited) version of the implementation as a pkgsrc package that can be used on other platforms by packages and applications that support it.

Note that step 1 is the hard part. This project requires a good bit of experience and familiarity with Unix and operating system design to allow coming up with a good plan. If you think it's obvious that fontconfig is the right answer and/or you can't see why there might be any reason to change anything about fontconfig, then this may not be the right project for you.

Because of this, this project is really not suitable for GSoC.

Posted at midnight, March 26th, 2012 Tags:

Dependency handling in pkgsrc is a rather complex task. There exist some cases (TeX packages, Perl packages) where it is hard to find build dependencies precisely, so the whole thing is handled conservatively. E.g., the whole TeXLive meta-package is declared a build dependency even when only a small fraction of it is actually used. Another case is a stale heavy dependency which is no longer required but is still listed as a prerequisite.

It would be nice to have a tool (or a set of them, if necessary) to detect which installed packages, libraries or tools were actually used to build a new package. Ideally, the tool should report files used during the configure, build, and test stages, and the packages these files are provided by.

Milestones:

  • find or develop a good dependency graph algorithm
  • implement and demonstrate your new system in pkgsrc by adding a make target
  • expose this algorithm for use by websites such as

Posted late Saturday evening, March 17th, 2012 Tags:

There exist two ways of launching processes: one is forking with fork or vfork and then replacing the child's image with an exec-family function; the other is spawning the process directly with procedures like posix_spawn. Not all platforms implement the fork model, and the spawn model has its own merits.

pkgsrc relies heavily on launching subprocesses when building software. NetBSD posix_spawn support was implemented in GSoC 2011 and is included in NetBSD 6.0. Now that NetBSD supports both ways, it would be nice to compare the efficiency of both ways of launching subprocesses and measure their effect when using pkgsrc (both in user and developer mode). In order to accomplish that, the following tools should support posix_spawn:

  • devel/bmake
  • shells/pdksh
  • NetBSD base make
  • NetBSD sh
  • NetBSD base ksh
  • potentially some other tools (e.g. lang/nawk, shells/bash, lang/perl5)

Optionally, MinGW spawn support can be added as well.


  • support starting processes and subprocesses by posix_spawn in devel/bmake
  • support starting processes and subprocesses by posix_spawn in shells/pdksh,
  • measure its efficiency and compare it to traditional fork+exec.
Posted late Saturday evening, March 17th, 2012 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to implement lockless and atomic FIFO/LIFO queues in the kernel. The routines to be implemented allow commonly typed items to be locklessly inserted at either the head or tail of a queue for last-in, first-out (LIFO) or first-in, first-out (FIFO) behavior, respectively. However, a queue is not intrinsically LIFO or FIFO; its behavior is determined solely by which method was used to push each item onto the queue.

It is only possible for an item to be removed from the head of the queue. This removal is also performed in a lockless manner.

All items in the queue must share an atomic_queue_link_t member at the same offset from the beginning of the item. This offset is passed to atomic_qinit.

The proposed interface looks like this:

  • void atomic_qinit(atomic_queue_t *q, size_t offset);

    Initializes the atomic_queue_t queue at q. offset is the offset to the atomic_queue_link_t inside the data structure where the pointer to the next item in this queue will be placed. It should be obtained using offsetof.

  • void *atomic_qpeek(atomic_queue_t *q);

    Returns a pointer to the item at the head of the supplied queue q. If there was no item because the queue was empty, NULL is returned. No item is removed from the queue. Given this is an unlocked operation, it should only be used as a hint as to whether the queue is empty or not.

  • void *atomic_qpop(atomic_queue_t *q);

    Removes the item (if present) at the head of the supplied queue q and returns a pointer to it. If there was no item to remove because the queue was empty, NULL is returned. Because this routine uses atomic Compare-And-Store operations, the returned item should stay accessible for some indeterminate time so that other interrupted or concurrent callers to this function with this q can continue to dereference it without trapping.

  • void atomic_qpush_fifo(atomic_queue_t *q, void *item);

    Places item at the tail of the atomic_queue_t queue at q.

  • void atomic_qpush_lifo(atomic_queue_t *q, void *item);

    Places item at the head of the atomic_queue_t queue at q.

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to implement lockless, atomic and generic Radix and Patricia trees. BSD systems have always used a radix tree for their routing tables. However, the radix tree implementation is showing its age. Its lack of flexibility (it is only suitable for use in a routing table) and overhead of use (requires memory allocation/deallocation for insertions and removals) make replacing it with something better tuned to today's processors a necessity.

Since a radix tree branches on bit differences, finding these bit differences efficiently is crucial to the speed of tree operations. This is most quickly done by XORing the key and the tree node's value together and then counting the number of leading zeroes in the result of the XOR. Many processors today (ARM, PowerPC) have instructions that can count the number of leading zeroes in a 32 bit word (and even a 64 bit word). Even those that do not can use a simple constant time routine to count them:

unsigned int
clz(unsigned int bits)
{
    unsigned int zeroes = 0;
    if (bits == 0)
        return 32;
    if (bits & 0xffff0000) bits &= 0xffff0000; else zeroes += 16;
    if (bits & 0xff00ff00) bits &= 0xff00ff00; else zeroes += 8;
    if (bits & 0xf0f0f0f0) bits &= 0xf0f0f0f0; else zeroes += 4;
    if (bits & 0xcccccccc) bits &= 0xcccccccc; else zeroes += 2;
    if (bits & 0xaaaaaaaa) bits &= 0xaaaaaaaa; else zeroes += 1;
    return zeroes;
}

The existing BSD radix tree implementation does not use this method but instead uses a far more expensive method of comparison. Adapting the existing implementation to do the above is actually more expensive than writing a new implementation.

The primary requirements for the new radix tree are:

  • Be self-contained. It cannot require additional memory other than what is used in its data structures.

  • Be generic. A radix tree has uses outside networking.

To make the radix tree flexible, all knowledge of how keys are represented has to be encapsulated into a pt_tree_ops_t structure with these functions:

  • bool ptto_matchnode(const void *foo, const void *bar, pt_bitoff_t max_bitoff, pt_bitoff_t *bitoffp, pt_slot_t *slotp);

    Returns true if both foo and bar objects have an identical string of bits starting at *bitoffp and ending before max_bitoff. In addition to returning true, *bitoffp should be set to the smaller of max_bitoff or the length, in bits, of the compared bit strings. Any bits before *bitoffp are to be ignored. If the strings of bits are not identical, *bitoffp is set to where the bit difference occurred, *slotp is set to the value of that bit in foo, and false is returned. The foo and bar (if not NULL) arguments are pointers to a key member inside a tree object. If bar is NULL, then assume it points to a key consisting entirely of zero bits.

  • bool ptto_matchkey(const void *key, const void *node_key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);

    Returns true if both key and node_key objects have identical strings of bitlen bits starting at bitoff. The key argument is the same key argument supplied to ptree_find_filtered_node.

  • pt_slot_t ptto_testnode(const void *node_key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);

    Returns bitlen bits starting at bitoff from node_key. The node_key argument is a pointer to the key members inside a tree object.

  • pt_slot_t ptto_testkey(const void *key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);

    Returns bitlen bits starting at bitoff from key. The key argument is the same key argument supplied to ptree_find_filtered_node.

All bit offsets are relative to the most significant bit of the key.

The ptree programming interface should contain these routines:

  • void ptree_init(pt_tree_t *pt, const pt_tree_ops_t *ops, size_t ptnode_offset, size_t key_offset);

    Initializes a ptree. If pt points at an existing ptree, all knowledge of that ptree is lost. The pt argument is a pointer to the pt_tree_t to be initialized. The ops argument is a pointer to the pt_tree_ops_t used by the ptree; its four members are described above. The ptnode_offset argument contains the offset from the beginning of an item to its pt_node_t member. The key_offset argument contains the offset from the beginning of an item to its key data. If key_offset is 0, a pointer to the beginning of the item will be generated instead.

  • void *ptree_find_filtered_node(pt_tree_t *pt, const void *key, pt_filter_t filter, void *filter_ctx);

    The filter argument is either NULL or a function bool (*)(const void *, void *, int);

  • bool ptree_insert_mask_node(pt_tree_t *pt, void *item, pt_bitlen_t masklen);

  • bool ptree_insert_node(pt_tree_t *pt, void *item);

  • void *ptree_iterate(pt_tree_t *pt, const void *node, pt_direction_t direction);

  • void ptree_remove_node(pt_tree_t *pt, const pt_tree_ops_t *ops, void *item);

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to enhance the networking protocols to process incoming packets more efficiently. The basic idea is the following: when a packet is received and it is destined for a socket, simply place the packet in the socket's receive PCQ (see atomic pcq) and wake the blocking socket. Then, the protocol is able to process the next packet.

The typical packet flow from ip_input is to {rip,tcp,udp}_input which:

  • Does the lookup to locate the socket which takes a reader lock on the appropriate pcbtable's hash bucket.
  • If found and in the proper state:
    • Do not lock the socket since that might block and therefore stop packet demultiplexing.
    • pcq_put the packet to the pcb's pcq.
    • kcont_schedule the worker continuation with a small delay (~100ms). See kernel continuations.
    • Lock the socket's cvmutex.
    • Release the pcbtable lock.
    • If TCP and in sequence, then if we need to send an immediate ACK:
      • Try to lock the socket.
      • If successful, send an ACK.
    • Set a flag to process the PCQ.
    • cv_signal the socket's cv.
    • Release the cvmutex.
  • If not found or not in the proper state:
    • Release the pcb hash table lock.
Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to implement interrupt handling at the granularity of a networking interface. When a network device gets an interrupt, it could call <iftype>_defer(ifp) to schedule a kernel continuation (see kernel continuations) for that interface which could then invoke <iftype>_poll. Whether the interrupted source should be masked depends on if the device is a DMA device or a PIO device. This routine should then call (*ifp->if_poll)(ifp) to deal with the interrupt's servicing.

During servicing, any received packets should be passed up via (*ifp->if_input)(ifp, m), which would be responsible for ALTQ or any other optional processing as well as protocol dispatch. Protocol dispatch in <iftype>_input decodes the datalink headers, if needed, via a table lookup, and calls the matching protocol's pr_input to process the packet. As such, interrupt queues (e.g. ipintrq) would no longer be needed. Any transmitted packets can be processed, as can MII events. Either true or false should be returned by if_poll depending on whether another invocation of <iftype>_poll for this interface should be immediately scheduled or not, respectively.

Memory allocation has to be prohibited in the interrupt routines. The device's if_poll routine should pre-allocate enough mbufs to do any required buffering. For devices doing DMA, the buffers are placed into receive descriptors to be filled via DMA.

For devices doing PIO, pre-allocated mbufs are enqueued onto the softc of the device so that when the interrupt routine needs one it simply dequeues one, fills it in, enqueues it onto a completed queue, and finally calls <iftype>_defer. If the number of pre-allocated mbufs drops below a threshold, the driver may decide to increase the number of mbufs that if_poll pre-allocates. If there are no mbufs left to receive the packet, the packet is dropped and the number of mbufs for if_poll to pre-allocate should be increased.

When interrupts are unmasked depends on a few things. If the device is interrupting "too often", it might make sense for the device's interrupts to remain masked and just schedule the device's continuation for the next clock tick. This assumes the system has a high enough value set for HZ.

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to implement continuations at the kernel level. Most of the pieces are already available in the kernel, so this can be reworded as: combine callouts, softints, and workqueues into a single framework. Continuations are meant to be cheap; very cheap.

These continuations are a dispatching system for making callbacks at scheduled times or in different thread/interrupt contexts. They aren't "continuations" in the usual sense such as you might find in Scheme code.

Please note that the main goal of this project is to simplify the implementation of SMP networking, so care must be taken in the design of the interface to support all the features required for this other project.

The proposed interface looks like the following. This interface is mostly derived from the callout(9) API and is a superset of the softint(9) API. The most significant change is that workqueue items are not tied to a specific kernel thread.

  • kcont_t *kcont_create(kcont_wq_t *wq, kmutex_t *lock, void (*func)(void *, kcont_t *), void *arg, int flags);

    A wq must be supplied. It may be one returned by kcont_workqueue_acquire or a predefined workqueue such as (sorted from highest priority to lowest):

    • wq_softserial, wq_softnet, wq_softbio, wq_softclock
    • wq_prihigh, wq_primedhigh, wq_primedlow, wq_prilow

    lock, if non-NULL, should be locked before calling func(arg) and released afterwards. However, if the lock is released and/or destroyed before the called function returns, then, before returning, kcont_set_mutex must be called with either a new mutex to be released or NULL. If acquiring lock would block, other pending kernel continuations which depend on other locks may be dispatched in the meantime. However, all continuations sharing the same set of { wq, lock, [ci] } need to be processed in the order they were scheduled.

    flags must be 0. This field is just provided for extensibility.

  • int kcont_schedule(kcont_t *kc, struct cpu_info *ci, int nticks);

    If the continuation is marked as INVOKING, an error of EBUSY should be returned. If nticks is 0, the continuation is marked as INVOKING while EXPIRED and PENDING are cleared, and the continuation is scheduled to be invoked without delay. Otherwise, the continuation is marked as PENDING while EXPIRED status is cleared, and the timer is reset to nticks. Once the timer expires, the continuation is marked as EXPIRED and INVOKING, and the PENDING status is cleared. If ci is non-NULL, the continuation is invoked on the specified CPU if the continuation's workqueue has per-cpu queues. If that workqueue does not provide per-cpu queues, an error of ENOENT is returned. Otherwise when ci is NULL, the continuation is invoked on either the current CPU or the next available CPU depending on whether the continuation's workqueue has per-cpu queues or not, respectively.

  • void kcont_destroy(kcont_t *kc);

  • kmutex_t *kcont_getmutex(kcont_t *kc);

    Returns the lock currently associated with the continuation kc.

  • void kcont_setarg(kcont_t *kc, void *arg);

    Updates arg in the continuation kc. If no lock is associated with the continuation, then arg may be changed at any time; however, if the continuation is being invoked, it may not pick up the change. Otherwise, kcont_setarg must only be called when the associated lock is locked.

  • kmutex_t *kcont_setmutex(kcont_t *kc, kmutex_t *lock);

    Updates the lock associated with the continuation kc and returns the previous lock. If no lock is currently associated with the continuation, then calling this function with a lock other than NULL will trigger an assertion failure. Otherwise, kcont_setmutex must be called only when the existing lock (which will be replaced) is locked. If kcont_setmutex is called as a result of the invocation of func, then after kcont_setmutex has been called but before func returns, the replaced lock must have been released, and the replacement lock, if non-NULL, must be locked upon return.

  • void kcont_setfunc(kcont_t *kc, void (*func)(void *, kcont_t *), void *arg);

    Updates func and arg in the continuation kc. If no lock is associated with the continuation, then only arg may be changed. Otherwise, kcont_setfunc must be called only when the associated lock is locked.

  • bool kcont_stop(kcont_t *kc);

    The kcont_stop function stops the timer associated with the continuation handle kc. The PENDING and EXPIRED status for the continuation handle is cleared. It is safe to call kcont_stop on a continuation handle that is not pending, so long as it is initialized. kcont_stop returns true if the continuation was EXPIRED.

  • bool kcont_pending(kcont_t *kc);

    The kcont_pending function tests the PENDING status of the continuation handle kc. A PENDING continuation is one whose timer has been started and has not expired. Note that it is possible for a continuation's timer to have expired without the continuation being invoked, if the continuation's lock could not be acquired or there are higher-priority threads preventing its invocation. Note that it is only safe to test the PENDING status when holding the continuation's lock.

  • bool kcont_expired(kcont_t *kc);

    Tests to see if the continuation's function has been invoked since the last kcont_schedule.

  • bool kcont_active(kcont_t *kc);

  • bool kcont_invoking(kcont_t *kc);

    Tests the INVOKING status of the handle kc. This flag is set just before a continuation's function is called. Since the scheduling of the worker threads may induce delays, other pending higher-priority code may run before the continuation function is allowed to run. This may create a race condition if this higher-priority code deallocates storage containing one or more continuation structures whose continuation functions are about to be run. In such cases, one technique to prevent references to deallocated storage is to test whether any continuation functions are in the INVOKING state using kcont_invoking, and if so, to mark the data structure and defer storage deallocation until the continuation function is allowed to run. For this handshake protocol to work, the continuation function must use the kcont_ack function to clear this flag.

  • bool kcont_ack(kcont_t *kc);

    Clears the INVOKING state in the continuation handle kc. This is used in situations where it is necessary to protect against the race condition described under kcont_invoking.

  • kcont_wq_t *kcont_workqueue_acquire(pri_t pri, int flags);

    Returns a workqueue that matches the specified criteria. Thus if multiple requesters ask for the same criteria, they are all returned the same workqueue. pri specifies the priority at which the kernel thread which empties the workqueue should run.

    If flags is 0 then the standard operation is required. However, the following flag(s) may be bitwise ORed together:

    • WQ_PERCPU specifies that the workqueue should have a separate queue for each CPU, thus allowing continuations to be invoked on specific CPUs.

  • int kcont_workqueue_release(kcont_wq_t *wq);

    Releases an acquired workqueue. On the last release, the workqueue's resources are freed and the workqueue is destroyed.
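The state-flag behavior described above (PENDING, EXPIRED, INVOKING, and the kcont_ack handshake) can be sketched as a small userspace model. The flag names mirror the proposal, but everything else here is hypothetical illustration, not a kernel implementation:

```c
#include <errno.h>

/* Illustrative userspace model of the proposed flag transitions. */

#define KC_PENDING  0x1
#define KC_EXPIRED  0x2
#define KC_INVOKING 0x4

struct kc_model {
    int flags;
    int nticks;     /* remaining timer ticks while PENDING */
    int wq_percpu;  /* does the workqueue have per-cpu queues? */
};

/* Mirrors kcont_schedule(): 0 on success, errno-style code on failure. */
int
kc_schedule(struct kc_model *kc, int have_ci, int nticks)
{
    if (kc->flags & KC_INVOKING)
        return EBUSY;
    if (have_ci && !kc->wq_percpu)
        return ENOENT;              /* no per-cpu queues on this wq */
    if (nticks == 0) {
        kc->flags &= ~(KC_EXPIRED | KC_PENDING);
        kc->flags |= KC_INVOKING;   /* invoke without delay */
    } else {
        kc->flags &= ~KC_EXPIRED;
        kc->flags |= KC_PENDING;    /* (re)start the timer */
        kc->nticks = nticks;
    }
    return 0;
}

/* Timer expiry: PENDING is cleared, EXPIRED and INVOKING are set. */
void
kc_timer_fires(struct kc_model *kc)
{
    kc->flags &= ~KC_PENDING;
    kc->flags |= KC_EXPIRED | KC_INVOKING;
}

/* Mirrors kcont_ack(): the continuation function clears INVOKING so
 * an owner waiting to free the structure knows it has run. */
void
kc_ack(struct kc_model *kc)
{
    kc->flags &= ~KC_INVOKING;
}
```

Note how the EBUSY path enforces the handshake: a continuation cannot be rescheduled until its function has acknowledged the previous invocation.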

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to improve the way the processing of incoming packets is handled.

Instead of having a set of active workqueue lwps waiting to service sockets, the kernel should use the lwp that is blocked on the socket to service the workitem. It is not productive being blocked and it has an interest in getting that workitem done, and maybe we can directly copy that data to user's address and avoid queuing in the socket at all.

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to remove the ARP, AARP, ISO SNPA, and IPv6 Neighbors from the routing table. Instead, the ifnet structure should have a set of nexthop caches (usually implemented using patricia trees), one per address family. Each nexthop entry should contain the datalink header needed to reach the neighbor.

This will remove cloneable routes from the routing table and remove the need to maintain protocol-specific code in the common Ethernet, FDDI, PPP, etc. code and put it back where it belongs, in the protocol itself.
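The shape of such a per-ifnet nexthop cache can be sketched as follows; a linear table stands in for the patricia tree, and all names here are hypothetical, not existing kernel structures:

```c
#include <string.h>
#include <stddef.h>

#define MAXNH   8
#define HDRLEN 14           /* e.g. an Ethernet header */

/* One cache entry: neighbor address plus the prebuilt datalink header. */
struct nhentry {
    unsigned addr;                  /* neighbor address (one family) */
    unsigned char dlhdr[HDRLEN];    /* cached datalink header */
    int valid;
};

struct nhcache {                    /* one per ifnet, per address family */
    struct nhentry e[MAXNH];
};

int
nh_insert(struct nhcache *c, unsigned addr, const unsigned char *hdr)
{
    for (int i = 0; i < MAXNH; i++) {
        if (!c->e[i].valid) {
            c->e[i].addr = addr;
            memcpy(c->e[i].dlhdr, hdr, HDRLEN);
            c->e[i].valid = 1;
            return 0;
        }
    }
    return -1;                      /* cache full */
}

const unsigned char *
nh_lookup(const struct nhcache *c, unsigned addr)
{
    for (int i = 0; i < MAXNH; i++)
        if (c->e[i].valid && c->e[i].addr == addr)
            return c->e[i].dlhdr;
    return NULL;                    /* miss: resolve via ARP/ND etc. */
}
```

The point of the design is visible in nh_lookup: the datalink code gets a ready-made header without consulting the routing table or any protocol-specific code.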

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking.

The goal of this project is to make the SYN cache optional: for small systems it is complete overkill.

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

This project proposal is a subtask of smp networking and is eligible for funding independently.

The goal of this project is to implement full virtual network stacks. A virtual network stack collects all the global data for an instance of a network stack (excluding AF_LOCAL). This includes the routing table, data for multiple domains and their protocols, and the mutexes needed for regulating access to it all. A brane is an instance of such a networking stack.

An interface belongs to a brane, as do processes. This can be considered a chroot(2) for networking, e.g. chbrane(2).

Posted in the wee hours of Wednesday night, November 10th, 2011 Tags:

Design and program a scheme for extending the operating range of 802.11 networks by using techniques like frame combining and error-correcting codes to cope with low S/(N+I) ratio. Implement your scheme in one or two WLAN device drivers -- Atheros & Realtek, say.

This project is on hold due to the conversion project needing to be completed first.

Posted Sunday evening, November 6th, 2011 Tags:

Modern 802.11 NICs provide two or more transmit descriptor rings, one for each priority level or 802.11e access category. Add to NetBSD a generic facility for placing a packet onto a different hardware transmit queue according to its classification by pf or IP Filter. Demonstrate this facility on more than one 802.11 chipset.

This project is on hold due to the conversion project needing to be completed first.

Posted Sunday evening, November 6th, 2011 Tags:

BSD make (aka bmake) uses traditional suffix rules (.c.o: ...) instead of pattern rules like gmake's (%.c:%.o: ...) which are more general and flexible.

The suffix module should be re-written to work from a general match-and-transform function on targets, which is sufficient to implement not only traditional suffix rules and gmake pattern rules, but also whatever other more general transformation rule syntax comes down the pike next. Then the suffix rule syntax and pattern rule syntax can both be fed into this code.

Note that it is already possible to write rules where the source is computed explicitly based on the target, simply by using $(.TARGET) in the right hand side of the rule. Investigate whether this logic should be rewritten using the match-and-transform code, or if the match-and-transform code should use the logic that makes these rules possible instead.
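The core of such a match-and-transform function can be sketched in a few lines: one routine that handles gmake-style '%' patterns, of which traditional suffix rules are a special case (a ".c.o" rule is just "%.o" from "%.c"). This is hypothetical illustration, not bmake's actual internals:

```c
#include <string.h>
#include <stdio.h>
#include <stddef.h>

/* If target matches tpat (containing one '%' wildcard), write the
 * implied source (spat with '%' replaced by the matched stem) into
 * src. Returns 1 on a match, 0 otherwise. */
int
match_transform(const char *target, const char *tpat,
                const char *spat, char *src, size_t srclen)
{
    const char *pc = strchr(tpat, '%');
    if (pc == NULL)
        return strcmp(target, tpat) == 0;   /* no wildcard: exact match */
    size_t pre = (size_t)(pc - tpat);       /* pattern prefix length */
    size_t suf = strlen(pc + 1);            /* pattern suffix length */
    size_t tlen = strlen(target);
    if (tlen < pre + suf ||
        strncmp(target, tpat, pre) != 0 ||
        strcmp(target + tlen - suf, pc + 1) != 0)
        return 0;
    /* The stem is target[pre .. tlen-suf); substitute it into spat. */
    const char *sc = strchr(spat, '%');
    size_t spre = sc ? (size_t)(sc - spat) : strlen(spat);
    snprintf(src, srclen, "%.*s%.*s%s",
             (int)spre, spat,
             (int)(tlen - suf - pre), target + pre,
             sc ? sc + 1 : "");
    return 1;
}
```

A suffix rule ".c.o" would be registered as the pattern pair ("%.o", "%.c"), so both syntaxes feed the same primitive.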

Implementing pattern rules is widely desired in order to be able to read more makefiles written for gmake, even though gmake's pattern rules aren't well designed or particularly principled.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 2-3 months

Currently the buffer handling logic only sorts the buffer queue (aka disksort). In an ideal world it would be able to coalesce adjacent small requests, as this can produce considerable speedups. It might also be worthwhile to split large requests into smaller chunks on the fly as needed by hardware or lower-level software.

Note that the latter interacts nontrivially with the ongoing dynamic MAXPHYS project and might not be worthwhile. Coalescing adjacent small requests (up to some potentially arbitrary MAXPHYS limit) is worthwhile regardless, though.
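The coalescing step itself is straightforward once the queue is in disksort order; the following is a hedged userspace sketch (hypothetical names, not the actual buffer queue code), merging physically adjacent requests up to a MAXPHYS-like cap:

```c
#include <stddef.h>

struct breq { long blkno; long count; };    /* one queued request */

/* Merge adjacent neighbors in a sorted queue, in place, without
 * letting any merged request exceed maxblks. Returns the new length. */
size_t
coalesce(struct breq *q, size_t n, long maxblks)
{
    if (n == 0)
        return 0;
    size_t out = 0;
    for (size_t i = 1; i < n; i++) {
        struct breq *cur = &q[out];
        if (q[i].blkno == cur->blkno + cur->count &&
            cur->count + q[i].count <= maxblks) {
            cur->count += q[i].count;       /* physically adjacent: merge */
        } else {
            q[++out] = q[i];                /* gap or too big: keep separate */
        }
    }
    return out + 1;
}
```

Splitting oversized requests would be the inverse pass, walking the queue and breaking any request whose count exceeds the lower layer's limit.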

Posted Sunday evening, November 6th, 2011 Tags:

NetBSD version of a compressed cache system (for low-memory devices).

Posted Sunday evening, November 6th, 2011 Tags:

In a file system with symlinks, the file system can be seen as a graph rather than a tree. The meaning of .. potentially becomes complicated in this environment.

There is a fairly substantial group of people, some of them big famous names, who think that the usual behavior (where crossing a symlink is different from entering a subdirectory) is a bug, and have made various efforts from time to time to "fix" it. One such fix can be seen in the -L and -P options to ksh's pwd.

Rob Pike implemented a neat hack for this in Plan 9. It is described in his paper "Lexical File Names in Plan 9, or, Getting Dot-Dot Right". This project is to implement that logic for NetBSD.

Note however that there's another fairly substantial group of people, some of them also big famous names, who think that all of this is a load of dingo's kidneys, the existing behavior is correct, and changing it would be a bug. So it needs to be possible to switch the implementation on and off as per-process state.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 2-3 months

The ext2 file system is the lowest common denominator Unix-like file system in the Linux world, as ffs is in the BSD world. NetBSD has had kernel support for ext2 for quite some time.

However, the Linux world has moved on, with ext3 and now to some extent also ext4 superseding ext2 as the baseline. NetBSD has no support for ext3; the goal of this project is to implement that support.

Since ext3 is a backward-compatible extension that adds journaling to ext2, NetBSD can mount clean ext3 volumes as ext2 volumes. However, NetBSD cannot mount ext3 volumes with journaling and it cannot handle recovery for crashed volumes. As ext2 by itself provides no crash recovery guarantees whatsoever, this journaling support is highly desirable.

The ext3 support should be implemented by extending the existing ext2 support (which is in src/sys/ufs/ext2fs), not by rewriting the whole thing over from scratch. It is possible that some of the ostensibly filesystem-independent code that was added along with the ffs WAPBL journaling extensions might be also useable as part of an ext3 implementation; but it also might not be.

The full requirements for this project include complete support for ext3 in both the kernel and the userland tools. It is possible that a reduced version of this project with a clearly defined subset of these requirements could still be a viable GSOC project; if this appeals to you please coordinate with a prospective mentor. Be advised, however, that several past ext3-related GSOC projects have failed; it is a harder undertaking than you might think.

An additional useful add-on goal would be to audit the locking in the existing ext2 code; the ext2 code is not tagged MPSAFE, meaning it uses a biglock on multiprocessor machines, but it is likely that either it is in fact already safe and just needs to be tagged, or can be easily fixed. (Note that while this is not itself directly related to implementing ext3, auditing the existing ext2 code is a good way to become familiar with it.)

Posted Sunday evening, November 6th, 2011 Tags:

Implement a flash translation layer.

A flash translation layer does block remapping, translating from visible block addresses used by a file system to physical cells on one or more flash chips. This provides wear leveling, which is essential for effective use of flash, and also typically some amount of read caching and write buffering. (And it takes care of excluding cells that have gone bad.)

This allows FFS, LFS, msdosfs, or whatever other conventional file system to be used on raw flash chips. (Note that SSDs and USB flash drives and so forth contain their own FTLs.)

FTLs involve quite a bit of voodoo and there is a lot of prior art and research; do not just sit down and start coding.

There are also some research FTLs that we might be able to get the code for; it is probably worth looking into this.

Note that NAND flash and NOR flash are different and need different handling, and the various cell types and other variations also warrant different policy choices.

The degree of overprovisioning (that is, the ratio of the raw capacity of the flash chips to the advertised size of the resulting device) should be configurable as this is a critical factor for performance.

Making the device recoverable rather than munching itself in system crashes or power failures is a nice extra, although apparently the market considers this an optional feature for consumer devices.

The flash translation layer should probably be packaged as a driver that attaches to one or more flash chips and provides a disk-type block/character device pair.
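The essential remapping idea can be shown in a toy model: each logical write goes to a fresh physical cell and the map is updated, which is what spreads wear across the chip. Real FTLs add garbage collection, bad-cell exclusion, and crash recovery; all names here are hypothetical:

```c
#include <stddef.h>

#define NLOG  4      /* advertised (logical) blocks */
#define NPHYS 8      /* raw cells: 2x overprovisioned */

struct ftl {
    int map[NLOG];       /* logical -> physical, -1 if unwritten */
    int erases[NPHYS];   /* per-cell wear counter */
    int next;            /* naive round-robin allocation cursor */
};

void
ftl_init(struct ftl *f)
{
    for (int i = 0; i < NLOG; i++) f->map[i] = -1;
    for (int i = 0; i < NPHYS; i++) f->erases[i] = 0;
    f->next = 0;
}

static int
cell_live(const struct ftl *f, int phys)
{
    for (int i = 0; i < NLOG; i++)
        if (f->map[i] == phys)
            return 1;
    return 0;
}

/* Write a logical block: remap it to a fresh cell and charge an
 * erase to the cell it vacated. Overprovisioning (NPHYS > NLOG)
 * guarantees a free cell exists. Returns the physical cell used. */
int
ftl_write(struct ftl *f, int lblk)
{
    int phys = f->next;
    while (cell_live(f, phys))
        phys = (phys + 1) % NPHYS;
    f->next = (phys + 1) % NPHYS;
    int old = f->map[lblk];
    f->map[lblk] = phys;
    if (old != -1)
        f->erases[old]++;    /* old cell must be erased before reuse */
    return phys;
}
```

The overprovisioning ratio shows up directly here: the more spare cells, the longer the allocator can avoid erasing, which is why the ratio is performance-critical.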

Posted Sunday evening, November 6th, 2011 Tags:

Use puffs or refuse to write an imapfs that you can mount on /var/mail, either by writing a new one or porting the old existing Plan 9 code that does this.

Note: there might be existing solutions, please check upfront and let us know.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 4 months and up

There are many caches in the kernel. Most of these have knobs and adjustments, some exposed and some not, for sizing and writeback rate and flush behavior and assorted other voodoo, and most of the ones that aren't adjustable probably should be.

Currently all or nearly all of these caches operate on autopilot independent of the others, which does not necessarily produce good results, especially if the system is operating in a performance regime different from when the behavior was tuned by the implementors.

It would be nice if all these caches were instead coordinated, so that they don't end up fighting with one another. Integrated control of sizing, for example, would allow explicitly maintaining a sensible balance between different memory uses based on current conditions; right now you might get that, depending on whether the available voodoo happens to work adequately under the workload you have, or you might not. Also, it is probably possible to define some simple rules about eviction, like not evicting vnodes that have UVM pages still to be written out, that can help avoid unnecessary thrashing and other adverse dynamic behavior. And similarly, it is probably possible to prefetch some caches based on activity in others. It might even be possible to come up with one glorious unified cache management algorithm.

Also note that cache eviction and prefetching is fundamentally a form of scheduling, so all of this material should also be integrated with the process scheduler to allow it to make more informed decisions.

This is a nontrivial undertaking.

Step 1 is to just find all the things in the kernel that ought to participate in a coordinated caching and scheduling scheme. This should not take all that long. Some examples include:

  • UVM pages
  • file system metadata buffers
  • VFS name cache
  • vnode cache
  • size of the mbuf pool

Step 2 is to restructure and connect things up so that it is readily possible to get the necessary information from all the random places in the kernel that these things occupy, without making a horrible mess and without trashing system performance in the process or deadlocking out the wazoo. This is not going to be particularly easy or fast.

Step 3 is to take some simple steps, like suggested above, to do something useful with the coordinated information, and hopefully to show via benchmarks that it has some benefit.

Step 4 is to look into more elaborate algorithms for unified control of everything. The previous version of this project cited IBM's ARC ("Adaptive Replacement Cache") as one thing to look at. (But note that ARC may be encumbered -- someone please check on that and update this page.) Another possibility is to deploy machine learning algorithms to look for and exploit patterns.

Note: this is a serious research project. Step 3 will yield a publishable minor paper; step 4 will yield a publishable major paper if you manage to come up with something that works, and it quite possibly contains enough material for a PhD thesis.

Posted Sunday evening, November 6th, 2011 Tags:

Add support for Apple's extensions to ISO9660 to makefs, especially the ability to label files with Type & Creator IDs. See

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 1 year for port; 3 years for rewrite by one developer

Implement a BSD licensed JFS. A GPL licensed implementation of JFS is available at

Alternatively, or additionally, it might be worthwhile to do a port of the GPL code and allow it to run as a kernel module.

Posted Sunday evening, November 6th, 2011 Tags:

Today a number of operating systems provide some form of kernel-level virtualization that offers better isolation than the traditional (yet more portable) chroot(2). Currently, NetBSD lacks functionality in this field; there have been multiple attempts (gaols, mult) to implement a jails-like system, but none so far has been integrated in base.

The purpose of this project is to study the various implementations found elsewhere (FreeBSD Jails, Solaris Zones, Linux Containers/VServers, ...), and evaluate their respective strengths and weaknesses. An additional step would be to see how this can be implemented using the various architectural improvements NetBSD has gained, especially rump(3) and kauth(9).

Caution: this is a research project.

Posted Sunday evening, November 6th, 2011 Tags:

kernfs is a virtual file system that reports information about the running system, and in some cases allows adjusting this information. procfs is a virtual file system that provides information about currently running processes. Both of these file systems work by exposing virtual files containing textual data.

The current implementations of these file systems are redundant and both are non-extensible. For example, kernfs is a hardcoded table that always exposes the same set of files; there is no way to add or remove entries on the fly, and even adding new static entries is a nuisance. procfs is similarly limited; there is no way to add additional per-process data on the fly. Furthermore, the current code is not modular, not well designed, and has been a source of security bugs in the past.

We would like to have a new implementation for both of these file systems that rectifies these problems and others, as outlined below:

  • kernfs and procfs should share most of their code, and in particular they should share all the code for managing lists of virtual files. They should remain separate entities, however, at least from the user perspective: community consensus is that mixing system and per-process data, as Linux always has, is ugly.

  • It should be possible to add and remove entries on the fly, e.g. as modules are loaded and unloaded.

  • Because userlevel programs can become dependent on the format of the virtual files (Linux has historically had compatibility problems because of this) they should if possible not have complex formats at all, and if they do the format should be clearly specifiable in some way that isn't procedural code. (This makes it easier to reason about, and harder for it to get changed by accident.)

  • There is an additional interface in the kernel for retrieving and adjusting arbitrary kernel information: sysctl. Currently the sysctl code is a third completely separate mechanism, on many points redundant with kernfs and/or procfs. It is somewhat less primitive, but the current implementation is cumbersome and not especially liked. Integrating kernfs and procfs with sysctl (perhaps somewhat like the Linux sysfs) is not automatically the right design choice, but it is likely to be a good idea. At a minimum we would like to be able to have one way to handle reportable/adjustable data within the kernel, so that kernfs, procfs, and/or sysctl can be attached to any particular data element as desired.

  • While most of the implementations of things like procfs and sysctl found in the wild (including the ones we currently have) work by attaching callbacks, and then writing code all over the kernel to implement the callback API, it is possible to design instead to attach data, that is, pointers to variables within the kernel, so that the kernfs/procfs or sysctl code itself takes responsibility for fetching that data. Please consider such a design strongly and pursue it if feasible, as it is much tidier. (Note that attaching data also in general requires specifying a locking model and, for writeable data, possibly a condition variable to signal on when the value changes and/or a mechanism for checking new values for validity.)
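The attach-data design in the last bullet can be sketched concisely: a node records the address of a kernel variable and an optional validator, and generic code does all fetching and storing. Everything below is hypothetical illustration of the idea, not an existing NetBSD interface:

```c
#include <stddef.h>

/* A node attaches data, not a callback: generic code dereferences
 * var directly, consulting the validator (if any) on writes. */
struct datanode {
    const char *name;
    int *var;                      /* attached kernel variable */
    int (*validate)(int newval);   /* NULL if any value is acceptable */
};

/* Sample validator a subsystem might attach. */
static int
positive_only(int v)
{
    return v > 0;
}

int
node_fetch(const struct datanode *dn)
{
    return *dn->var;               /* generic code reads the data */
}

int
node_store(struct datanode *dn, int newval)
{
    if (dn->validate != NULL && !dn->validate(newval))
        return -1;                 /* rejected by the validator */
    *dn->var = newval;
    return 0;
}
```

A real version would also carry the locking model mentioned above (which lock covers var) and a condition variable for change notification, but the key property is already visible: the subsystem exposes a pointer and a policy, and kernfs, procfs, and sysctl could all be generic front ends over the same node.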

It is possible that using tmpfs as a backend for kernfs and procfs, or sharing some code with tmpfs, would simplify the implementation. It also might not. Consider this possibility, and assess the tradeoffs; do not treat it as a requirement.

Alternatively, investigate FreeBSD's pseudofs and see if this could be a useful platform for this project and base for all the file systems mentioned above.

When working on this project, it is very important to write a complete regression test suite for procfs and kernfs beforehand to ensure that the rewrites do not create incompatibilities.

Posted Sunday evening, November 6th, 2011 Tags:

Improve on the Kismet design and implementation in a Kismet replacement for BSD.

Posted Sunday evening, November 6th, 2011 Tags:

Libvirt is a project that aims at bringing yet-another-level of abstraction to the management of different virtualization technologies; it supports a wide range of virtualization technologies, like Xen, VMWare, KVM and containers.

A package for libvirt was added to pkgsrc under sysutils/libvirt, however it requires more testing before all platforms supported by pkgsrc can also seamlessly support libvirt.

The purpose of this project is to investigate what is missing in libvirt (in terms of patches or system integration) so it can work out-of-the-box for platforms that can benefit from it. GNU/Linux, NetBSD, FreeBSD and Solaris are the main targets.

Posted Sunday evening, November 6th, 2011 Tags:

While NetBSD has had LiveCDs for a while, there has not yet been a LiveCD that allows users to install NetBSD after test-driving it. A LiveCD that contains a GUI based installer and reliably detects the platform's features would be very useful.

Posted Sunday evening, November 6th, 2011 Tags:

Apply statistical AI techniques to the problem of monitoring the logs of a busy system. Can one identify events of interest to a sysadmin, or events that merit closer inspection? Failing that, can one at least identify some events as routine and provide a filtered log that excludes them? Also, can one group a collection of related messages together into a single event?

Posted Sunday evening, November 6th, 2011 Tags:

NetBSD currently requires a system with an MMU. This obviously limits the portability. We'd be interested in an implementation/port of NetBSD on/to an MMU-less system.

Posted Sunday evening, November 6th, 2011 Tags:

The policy code in the kernel that controls file caching and readahead behavior is necessarily one-size-fits-all, and the knobs available to applications to tune it, like madvise() and posix_fadvise(), are fairly blunt hammers. Furthermore, it has been shown that the overhead from user<->kernel domain crossings makes syscall-driven fine-grained policy control ineffective. (Though, that was shown in the past when processors were much slower relative to disks and it may not be true any more.)

Is it possible to use BPF, or create a different BPF-like tool (that is, a small code generator with very simple and very clear safety properties) to allow safe in-kernel fine-grained policy control?

Caution: this is a research project.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-net
  • Duration estimate: 3 months

Implement the ability to route based on properties like QoS label, source address, etc.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-net
  • Duration estimate: 2 months

Write tests for the routing code and re-factor. Use more and better-named variables.

PCBs and other structures are sprinkled with route caches (struct route). Sometimes they do not get updated when they should. It's necessary to modify rtalloc(), at least. Fix that. Note XXX in rtalloc(); this may mark a potential memory leak!

Posted Sunday evening, November 6th, 2011 Tags:

Create/modify a 802.11 link-adaptation module that works at least as well as SampleRate, but is generic enough to be re-used by device drivers for ADMtek, Atheros, Intel, Ralink, and Realtek 802.11 chips. Make the module export a link-quality metric (such as ETT) suitable for use in linkstate routing. The SampleRate module in the Atheros device driver sources may be a suitable starting point.

Posted Sunday evening, November 6th, 2011 Tags:

Currently booting a sgimips machine requires different boot commands depending on the architecture. It is not possible to use the firmware menu to boot from CD.

An improved primary bootstrap should ask the firmware for architecture details, and automatically boot the correct kernel for the current architecture by default.

A secondary objective of this project would be to rearrange the generation of a bootable CD image so that it could just be loaded from the firmware menu without going through the command monitor.

Posted Sunday evening, November 6th, 2011 Tags:

NetBSD/sgimips currently runs on a number of SGI machines, but support for IP27 (Origin) and IP30 (Octane) is not yet available.

See also NetBSD/sgimips.

Posted Sunday evening, November 6th, 2011 Tags:

NetBSD/sgimips currently runs on O2s with R10k (or similar) CPUs, but for example speculative loads are not handled correctly. It is unclear if this is pure kernel work or the toolchain needs to be changed too.

Currently softfloat is used, and bugs seem to exist in the hardware float support. Resolving these bugs and switching to hardware float would improve performance.

See also NetBSD/sgimips.

Posted Sunday evening, November 6th, 2011 Tags:

Certain real-time clock chips, and other related power hardware, have a facility that allows the kernel to set a specific date and time at which the machine will power itself on. One such chip is the DS1685 RTC. A kernel API should be developed to allow such devices to have a power-on time set from userland. Additionally, the API should be made available through a userland program, or added to an existing utility, such as halt(8).

It may also be useful to make this a more generic interface, to allow for configuring other devices, such as Wake-On-Lan ethernet chips, to turn them on/off, set them up, etc.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 8-12 months

Remove the residual geometry code and datastructures from FFS (keep some kind of allocation groups but without most of what cylinder groups now have) and replace blocks and fragments with extents, yielding a much simpler filesystem well suited for modern disks.

Note that this results in a different on-disk format and will need to be a different file system type.

The result would be potentially useful to other operating systems beyond just NetBSD, since UFS/FFS is used in so many different kernels.

Posted Sunday evening, November 6th, 2011 Tags:

It would be nice to support these newer highly SMP processors from Sun. A Linux port already exists, and Sun has contributed code to the FOSS community.

(Some work has already been done and committed - see

Posted Sunday evening, November 6th, 2011 Tags:

syspkgs is the concept of using pkgsrc's pkg_* tools to maintain the base system. That is, allow users to register and update components of the base system more easily.

There has been a lot of work in this area already, but it has not yet been finalized, which is a diplomatic way of saying that this project has been attempted repeatedly and failed every time.

Posted Sunday evening, November 6th, 2011 Tags:

Add memory-efficient snapshots to tmpfs. A snapshot is a view of the filesystem, frozen at a particular point in time. The snapshotted filesystem is not frozen, only the view is. That is, you can continue to read/write/create/delete files in the snapshotted filesystem.

The interface to snapshots may resemble the interface to null mounts, e.g., 'mount -t snapshot /var/db /db-snapshot' makes a snapshot of /var/db/ at /db-snapshot/.

You should exploit features of the virtual memory system like copy-on-write memory pages to lazily make copies of files that appear both in a live tmpfs and a snapshot. This will help conserve memory.
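The copy-on-write sharing suggested above can be modeled in a few lines: a snapshot initially shares every page with the live filesystem, and a write to the live side copies the page first, leaving the snapshot's view intact. This is a hypothetical model, not tmpfs code:

```c
#include <stddef.h>

#define NPAGES 4

struct page { int data; int refs; };

struct view {
    struct page *pages[NPAGES];    /* both views point into a pool */
};

static struct page pool[2 * NPAGES];
static int pool_used;

/* Take a snapshot: share every page and bump its refcount. */
void
snapshot(struct view *live, struct view *snap)
{
    for (int i = 0; i < NPAGES; i++) {
        snap->pages[i] = live->pages[i];
        live->pages[i]->refs++;        /* shared until written */
    }
}

/* Write through the live view, copying a shared page first. */
void
cow_write(struct view *live, int i, int data)
{
    struct page *p = live->pages[i];
    if (p->refs > 1) {                 /* shared: copy before write */
        struct page *np = &pool[pool_used++];
        np->data = p->data;
        np->refs = 1;
        p->refs--;
        live->pages[i] = np;
        p = np;
    }
    p->data = data;
}
```

Memory efficiency comes from the untouched pages: only pages actually written after the snapshot ever get duplicated, which is exactly what UVM's copy-on-write machinery provides for free.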

Posted Sunday evening, November 6th, 2011 Tags:

While we now have mandoc for handling man pages, we currently still need groff in the tree to handle miscellaneous docs that are not man pages.

This is itself an inadequate solution as the groff we have does not support PDF output (which in this day and age is highly desirable) ... and while newer groff does support PDF output it does so via a Perl script. Also, importing a newer groff is problematic for assorted other reasons.

We need a way to typeset miscellaneous articles that we can import into base and that ideally is BSD licensed. (And that can produce PDFs.) Currently it looks like there are three decent ways forward:

  • Design a new roff macro package that's comparable to mdoc (e.g. supports semantic markup) but is for miscellaneous articles rather than man pages, then teach mandoc to handle it.

  • Design a new set of markup tags comparable to mdoc (e.g. supports semantic markup) but for miscellaneous articles, and a different less ratty syntax for it, then teach mandoc to handle this.

  • Design a new set of markup tags comparable to mdoc (e.g. supports semantic markup) but for miscellaneous articles, and a different less ratty syntax for it, and write a new program akin to mandoc to handle it.

These are all difficult and a lot of work, and in the case of new syntax are bound to cause a lot of shouting and stamping. Also, many of the miscellaneous documents use various roff preprocessors and it isn't clear how much of this mandoc can handle.

None of these options is particularly appealing.

There are also some less decent ways forward:

  • Pick one of the existing roff macro packages for miscellaneous articles (ms, me, ...) and teach mandoc to handle it. Unfortunately all of these macro packages are pretty ratty, they're underpowered compared to mdoc, and none of them support semantic markup.

  • Track down one of the other older roff implementations, that are now probably more or less free (e.g. ditroff), then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, any of these programs are likely to be old nasty code that needs a lot of work.

  • Teach the groff we have how to emit PDFs, then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, this will likely be pretty nasty work and it's still got the wrong license.

  • Rewrite groff as BSD-licensed code and provide support for generating PDFs, then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, this is a horrific amount of work.

  • Try to make something else do what we want. Unfortunately, TeX is a nonstarter and the only other halfway realistic candidate is lout... which is GPLv3 and at least at casual inspection looks like a horrible mess of its own.

These options are even less appealing.

Maybe someone can think of a better idea. There are lots of choices if we give up on typeset output, but that doesn't seem like a good plan either.

Posted Sunday evening, November 6th, 2011 Tags:

Due to the multitude of supported machine architectures, NetBSD has to deal with many different partitioning schemes. To deal with them in a uniform way (without imposing artificial restrictions that are not enforced by the underlying firmware or bootloader partitioning scheme) wedges have been designed.

While the kernel part of wedges is mostly done (and missing parts are easy to add), a userland tool to edit wedges and to synthesize defaults from (machine/arch dependent) on-disk content is needed.

Posted Sunday evening, November 6th, 2011 Tags:

Create an easy to use wifi setup widget for NetBSD: browse and select networks in the vicinity by SSID, BSSID, channel, etc.

The guts should probably be done as a library so that user interfaces of increasing slickness can be built on top of it as desired. (That is: there ought to be some form of this in base; but a nice looking gtk interface version would be good to have as well.)

Posted Sunday evening, November 6th, 2011 Tags:

Add socket options to NetBSD for controlling WLAN transmit parameters like transmit power, fragmentation threshold, RTS/CTS threshold, bitrate, 802.11e access category, on a per-socket and per-packet basis. To set transmit parameters, pass radiotap headers using sendmsg(2) and setsockopt(2).

This project is on hold due to the conversion project needing to be completed first.

Posted Sunday evening, November 6th, 2011 Tags:

The NetBSD website building infrastructure is rather complex and requires significant resources. We need to make it easier for anybody to contribute without having to install a large number of complex applications from pkgsrc or without having to learn the intricacies of the build process.

A more detailed description of the problem is described in this and this email and the following discussion on the netbsd-docs.

This work requires knowledge of XML, XSLT and make. This is not a request for visual redesign of the website.

Posted Sunday evening, November 6th, 2011 Tags:

With the push of virtualization, the x86 world has recently started to pay more widespread attention to supporting IOMMUs; similar to MMUs, which translate virtual addresses into physical ones, an IOMMU translates device/bus addresses into physical addresses. The purpose of this project is to add AMD and Intel IOMMU support to NetBSD's machine-independent bus abstraction layers and bus.dma(9).

Posted Sunday evening, November 6th, 2011 Tags:

The latest Xen versions come with a number of features that are currently not supported by NetBSD: USB/VGA passthrough, RAS (Reliability, Availability and Serviceability) options such as CPU and memory hotplugging, fault tolerance with Remus, and debugging with gdbx (a lightweight debugger included with Xen).

The purpose of this project is to add the missing parts inside NetBSD. Most of the work is composed of smaller components that can be worked on independently from others.

Posted Sunday evening, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 1 year for port; 3 years for rewrite by one developer

Implement a BSD licensed XFS. A GPL licensed implementation of XFS is available at

Alternatively, or additionally, it might be worthwhile to do a port of the GPL code and allow it to run as a kernel module.

See also FreeBSD's port.

Posted Sunday evening, November 6th, 2011 Tags:

Enhance zeroconfd, the Multicast DNS daemon that was begun in NetBSD's Google Summer of Code 2005 (see the work in progress). Develop a client library that lets a process publish mDNS records and receive asynchronous notification of new mDNS records. Adapt zeroconfd to use event(3) and queue(3). Demonstrate functionality comparable to the GPL or APSL alternatives (Avahi, Howl, ...), but in a smaller storage footprint, with no dependencies outside of the NetBSD base system.

Posted Sunday evening, November 6th, 2011 Tags:

Traditionally, the NetBSD kernel code had been protected by a single, global lock. This lock ensured that, on a multiprocessor system, two different threads of execution did not access the kernel concurrently and thus simplified the internal design of the kernel. However, such a design does not scale to multiprocessor machines because, effectively, the kernel is restricted to run on a single processor at any given time.

The NetBSD kernel has been modified to use fine grained locks in many of its different subsystems, achieving good performance on today's multiprocessor machines. Unfortunately, these changes have not yet been applied to the networking code, which remains protected by the single lock. In other words: NetBSD networking has evolved to work in a uniprocessor environment; switching it to use fine-grained locking is a hard and complex problem.

This project is currently claimed


At this time, the NetBSD Foundation is accepting project specifications to remove the single networking lock. If you want to apply for this project, please send your proposal to the contact addresses listed above.

Due to the size of this project, your proposal does not need to cover everything to qualify for funding. We have attempted to split the work into smaller units, and you can submit funding applications for these smaller subtasks independently as long as the work you deliver fits in the grand order of this project. For example, you could send an application to make the network interfaces alone MP-friendly (see the work plan below).

What follows is a particular design proposal, extracted from an original text written by Matt Thomas. You may choose to work on this particular proposal or come up with your own.

Tentative specification

The future of NetBSD network infrastructure has to efficiently embrace two major design criteria: Symmetric Multi-Processing (SMP) and modularity. Other design considerations include not only supporting but taking advantage of the capability of newer network devices to do packet classification, payload splitting, and even full connection offload.

You can divide the network infrastructure into 5 major components:

  • Interfaces (both real devices and pseudo-devices)
  • Socket code
  • Protocols
  • Routing code
  • mbuf code.

Part of the complexity is that, due to the monolithic nature of the kernel, each layer currently feels free to call any other layer. This makes designing a lock hierarchy difficult and likely to fail.

Part of the problem is asynchronous upcalls, including:

  • ifa->ifa_rtrequest for route changes.
  • pr_ctlinput for interface events.

Another source of complexity is the large number of global variables scattered throughout the source files. This makes putting locks around them difficult.


The solution proposed here includes the following tasks (in no particular order) to achieve the desired goals of SMP support and modularity:

Work plan

Aside from the list of tasks above, the work to be done for this project can be achieved by following these steps:

  1. Move ARP out of the routing table. See the nexthop cache project.

  2. Make the network interfaces MP-safe; they are among the few remaining users of the big kernel lock. This needs to support multiple receive and transmit queues to help reduce locking contention. It also includes changing more of the common interfaces to do what the tsec driver does (basically, do everything with softints), and changing the *_input routines to use a table for dispatch instead of the current switch code, so that domains can be dynamically loaded.

  3. Collect global variables in the IP/UDP/TCP protocols into structures. This helps the following items.

  4. Make IPV4/ICMP/IGMP/REASS MP-friendly.

  5. Make IPV6/ICMP/IGMP/ND MP-friendly.

  6. Make TCP MP-friendly.

  7. Make UDP MP-friendly.

Radical thoughts

You should also consider the following ideas:

LWPs in user space do not need a kernel stack

Those pages are only used in case an exception happens. Interrupts probably go to their own dedicated stack. One could just keep a set of kernel stacks around. Each CPU has one; when a user exception happens, that stack is assigned to the current LWP and removed as the CPU's active stack. When that CPU next returns to user space, the kernel stack it was using is saved to be used for the next user exception. The idle lwp would just use the current kernel stack.

LWPs waiting for kernel condition shouldn't need a kernel stack

If an LWP is waiting on a kernel condition variable, it is expecting to be inactive for some time, possibly a long time. During this inactivity, it does not really need a kernel stack.

When the exception handler gets a usermode exception, it sets the LWP's restartable flag, indicating that the exception is restartable, and then services the exception as normal. As routines are called, they can clear the LWP's restartable flag as needed. When an LWP needs to block for a long time, instead of calling cv_wait, it could call cv_restart. If cv_restart returns false, the LWP's restartable flag was clear, so cv_restart acted just like cv_wait. Otherwise, the LWP and CV have been tied together (big hand wave), the lock has been released, and the routine should return ERESTART. cv_restart could also wait for a small amount of time (say 0.5 seconds), and only take the restart path if the timeout expires.

As the stack unwinds, eventually it would return to the exception handler. The handler would see the LWP has a bound CV, save the LWP's user state into the PCB, set the LWP to sleeping, mark the LWP's stack as idle, and call the scheduler to find more work. When called, cpu_switchto would notice the stack is marked idle, and detach it from the LWP.

When the condition times out or is signalled, the first LWP attached to the condition variable is marked runnable and detached from the CV. When the cpu_switchto routine is called, it would notice the lack of a stack, grab one, restore the trapframe, and reinvoke the exception handler.

Posted late Sunday afternoon, November 6th, 2011 Tags:

Refactor utilities in the base system, such as netstat, top, and vmstat, that format & display tables of statistics.

One possible refactoring divides each program into three:

  • one program reads the statistics from the kernel and writes them in machine-readable form to its standard output
  • a second program reads machine-readable tables from its standard input and writes them in a human-readable format to the standard output
  • and a third program supervises a pipeline of the first two programs, browses and refreshes a table.

Several utilities will share the second and third program.

Posted late Saturday night, November 6th, 2011 Tags:

Currently, the puffs(3) interface between the kernel and userspace uses various system structures for passing information. Examples are struct stat and struct uucred. If these change in layout (such as with the time_t size change in NetBSD 6.0), old puffs servers must be recompiled.

The project milestones are:

  • define a binary-independent protocol
  • implement support
  • measure the performance difference with direct kernel struct passing
  • if there is a huge difference, investigate the possibility for having both an internal and external protocol. The actual decision to include support will be made on the relative complexity of the code for dual support.

While this project will be partially implemented in the kernel, it is fairly well-contained and prior kernel experience is not necessary.

If there is time and interest, a suggested subproject is making sure that p2k(3) does not suffer from similar issues. This is a required subproject if dual support as mentioned above is not necessary.

Posted late Saturday night, November 6th, 2011 Tags:

Write a script that aids refactoring C programs by extracting subroutines from fragments of C code.

Do not reinvent the wheel: wherever possible, use existing technology for the parsing and comprehension of C code. Look at projects such as sparse and Coccinelle.

Your script should work as a filter that puts the free variables at the top, encloses the rest in curly braces, does something helpful with break, continue, and return statements.

That's just a start.

This project is tagged "hard" because it's not clearly specified. The first step (like with many projects) is to work out the specification in more detail.

Posted late Saturday night, November 6th, 2011 Tags:
  • Contact: tech-pkg
  • Duration estimate: 3 months

The goal of this project is to generate a package or packages that will set up a cross-compiling environment for one (slow) NetBSD architecture on another (fast) NetBSD architecture, starting with (and using) the NetBSD toolchain in src.

The package will require a checked out NetBSD src tree (or as a refinement, parts thereof) and is supposed to generate the necessary cross-building tools using src/ with appropriate flags; for the necessary /usr/include and libraries of the slow architecture, build these to fit or use the slow architecture's base.tgz and comp.tgz (or parts thereof). As an end result, you should e.g. be able to install the binary cross/pkgsrc-NetBSD-amd64-to-atari package on your fast amd64 and start building packages for your slow atari system.

Use available packages, e.g. pkgtools/pkg_comp, to build the cross-compiling environment where feasible.

As test target for the cross-compiling environment, use pkgtools/pkg_install, which is readily cross-compilable by itself.

If time permits, test and fix cross-compiling pkgsrc, which was made cross-compilable in an earlier GSoC project, but may have suffered from feature rot since actually cross-compiling it has been too cumbersome for it to see sufficient use.

Posted late Saturday night, November 6th, 2011 Tags:

pkgsrc duplicates NetBSD efforts in maintaining packages. pkgsrc X11, being able to cross-build, could replace the base X11 distribution.

The latter decoupling should simplify maintenance of software; updating X11 and associated software becomes easier because of pkgsrc's shorter release cycle and greater volatility.

Cross-buildable and cross-installable pkgsrc tools could also simplify maintenance of slower systems by utilising the power of faster ones.

The goal of this project is to make it possible to bootstrap pkgsrc using available cross-build tools (e.g. NetBSD's).

This project requires good understanding of cross-development, some knowledge of NetBSD build process or ability to create cross-development toolchain, and familiarity with pkgsrc bootstrapping.

Note: basic infrastructure for this exists as part of various previous GSoC projects. General testing is lacking.

Posted late Saturday night, November 6th, 2011 Tags:
  • Contact: tech-kern
  • Duration estimate: 3 months

Make NetBSD behave gracefully when a "live" USB/FireWire disk drive is accidentally detached and re-attached by, for example, creating a virtual block device that receives block-read/write commands on behalf of the underlying disk driver.

This device will delegate reads and writes to the disk driver, but it will keep a list of commands that are "outstanding," that is, reads that the disk driver has not completed, and writes that have not "hit the platter," so to speak.


  • Provide a character device for userland to read indications that a disk in use was abruptly detached.
  • Following disk re-attachment, the virtual block device replays its list of outstanding commands. A correct solution will not replay commands to the wrong disk if the removable was replaced instead of re-attached.

Open questions: Prior art? Isn't this how the Amiga worked? How will this interact with mount/unmount—is there a use-count on devices? Can you leverage "wedges" in your solution? Does any/most/all removable storage indicate reliably when a block written has actually reached the medium?

Posted late Saturday night, November 6th, 2011 Tags:

Write a program that can read an email, infer quoting conventions, discern bottom-posted emails from top-posted emails, and rewrite the email using the conventions that the reader prefers. Then, take it a step further: write a program that can distill an entire email discussion thread into one document where every writer's contribution is properly attributed and appears in its proper place with respect to interleaved quotations.

Posted late Saturday night, November 6th, 2011 Tags:

Design and implement a mechanism that allows for fast user level access to kernel time data structures for NetBSD. For certain types of small data structures the system call overhead is significant. This is especially true for frequently invoked system calls like clock_gettime(2) and gettimeofday(2). With the availability of user level readable high frequency counters it is possible to create fast implementations for precision time reading. Optimizing clock_gettime(2) and alike will reduce the strain from applications frequently calling these system calls and improves timing information quality for applications like NTP. The implementation would be based on a to-be-modified version of the timecounters implementation in NetBSD.


  • Produce optimizations for clock_gettime
  • Produce optimizations for gettimeofday
  • Show benchmarks before and after
  • start evolving timecounters in NetBSD, demonstrating your improvements

See also the Paper on Timecounters by Poul-Henning Kamp.

Posted late Saturday night, November 6th, 2011 Tags:

The existing puffs protocol gives a way to forward kernel-level file system actions to a userspace process. This project generalizes that protocol to allow forwarding file system actions arbitrarily across a network. This will make it possible to mount any kernel file system type from any location on the network, given a suitable arrangement of components.

The file system components to be used are puffs and rump. puffs is used to forward local file system requests from the kernel to userspace and rump is used to facilitate running the kernel file system in userspace as a service daemon.

The milestones are the following:

  • Write the necessary code to be able to forward requests from one source to another. This involves most likely reworking a bit of the libpuffs option parsing code and creating a puffs client (say, mount_puffs) to be able to forward requests from one location to another. The puffs protocol should be extended to include the necessary new features or a new protocol invented.

    Proof-of-concept code for this has already been written. (Where is it?)

  • Currently the puffs protocol used for communication between the kernel and userland is machine dependent. To facilitate forwarding the protocol to remote hosts, a machine independent version must be specified.

  • To be able to handle multiple clients, the file systems must be converted to daemons instead of being utilities. This will also, in the case of kernel file system servers, include adding locking to the communication protocol.

The end result will look something like this:

# start serving ffs from /dev/wd0a on port 12675
onehost> ffs_serv -p 12675 /dev/wd0a
# start serving cd9660 from /dev/cd0a on port 12676
onehost> cd9660_serv -p 12676 /dev/cd0a

# meanwhile in outer space, mount anotherhost from port 12675
anotherhost> mount_puffs -t tcp onehost:12675 /mnt
anotherhost> mount
onehost:12675 on /mnt type <negotiated>
anotherhost> cd /mnt
anotherhost> ls
  ... etc

The implementor should have some familiarity with file systems and network services.

Any proposal should include answers to at least the following questions:

  • How is this different from NFS?

  • How is the protocol different from 9p?

  • How is this scheme going to handle the hard things that NFS doesn't do very well, such as distributed cache consistency?

  • Given industry trends, why is this project proposing a new protocol instead of conforming to the SOA model?

Posted late Saturday night, November 6th, 2011 Tags:

inetd is a classic method for launching network programs on-the-fly and some of its ideas are coming back into vogue. Enhancing this daemon should include investigations into other similar systems in other operating systems.

Primary milestones:

  • Prefork: Support pre-forking multiple children and keeping them alive for multiple invocations.
  • Per service configuration file: Add a per-service configuration file similar to xinetd.
  • Make the rate-limiting feature configurable on a per-service basis.
  • Improve the logging and make logging modes configurable on a per-service basis.

Nice to have:

  • Add include directives to the configuration language to allow service definitions to be installed in /usr/share or /usr/pkg/share.
  • Add a separate way to turn services on and off, so they can be defined statically (such as in /usr/share) and turned on and off from /etc.
  • Allow non-privileged users to add/remove/change their own services using a separate utility.
  • Integrate with the new blocklist daemon.
  • Configuration compatibility for systemd socket activations
Posted late Saturday night, November 6th, 2011 Tags:

Produce lessons and a list of affordable parts and free software that NetBSD hobbyists can use to teach themselves JTAG. Write your lessons for a low-cost embedded system or expansion board that NetBSD already supports.

Posted late Saturday night, November 6th, 2011 Tags:

Launchd is a MacOS/X utility that is used to start and control daemons, similar to init(8) but much more powerful. There was an effort to port launchd to FreeBSD, but it seems to have been abandoned. We should first investigate what happened to the FreeBSD effort to avoid duplicating work. The port is not trivial because launchd uses a lot of Mach features.


  • report of FreeBSD efforts (past and present)
  • launchd port replacing: init
  • launchd port replacing: rc
  • launchd port compatible with: rc.d scripts
  • launchd port replacing: watchdogd

Nice to have:

  • launchd port replacing/integrating: inetd
  • launchd port replacing: atd
  • launchd port replacing: crond
  • launchd port replacing: (the rest)
Posted late Saturday night, November 6th, 2011 Tags:

Design and implement a general API for control of LED and LCD type devices on NetBSD. The API would allow devices to register individual LED and LCD devices, along with a set of capabilities for each one. Devices that wish to display status via an LED/LCD would also register themselves as an event provider. A userland program would control the wiring of each event provider, to each output indicator. The API would need to encompass all types of LCD displays, such as 3 segment LCDs, 7 segment alphanumerics, and simple on/off state LED's. A simple example is a keyboard LED, which is an output indicator, and the caps-lock key being pressed, which is an event provider.

There is prior art in OpenBSD; it should be checked for suitability, and any resulting API should not differ from theirs without reason.


  • a port of OpenBSD's LED tools
  • a userland tool to control LED
  • demonstration of functionality
  • integration into NetBSD
Posted late Saturday night, November 6th, 2011 Tags:

Write a library of routines that estimate the expected transmission duration of a queue of 802.3 or 802.11 frames based on the current bit rate, applicable overhead, and the backoff level. The kernel will use this library to help it decide when packet queues have grown too long in order to mitigate "buffer bloat."

Posted late Saturday night, November 6th, 2011 Tags:

The current lpr/lpd system in NetBSD is ancient, and doesn't support modern printer systems very well. Interested parties would do a from scratch rewrite of a new, modular lpr/lpd system that would support both the old lpd protocol and newer, more modern protocols like IPP, and would be able to handle modern printers easily.

This project is not intrinsically difficult, but will involve a rather large chunk of work to complete.

Note that the goal of this exercise is not to reimplement cups -- cups already exists, and one of those is enough.

Some notes:

It seems that a useful way to do this would be to divide the printing system in two: a client-side system, which is user-facing and allows submitting print jobs to arbitrary print servers, and a server-side system, which implements queues and knows how to talk to actual printer devices. In the common case where you don't have a local printer but use printers that are out on the network somewhere, the server-side system wouldn't be needed at all. When you do have a local printer, the client-side system would submit jobs to the local server-side system using the lpr protocol (or IPP or something else) over a local socket but otherwise treat it no differently from any other print server.

The other important thing moving forward: lpr needs to learn about MIME types and accept an argument to tell it the MIME types of its input files. The current family of legacy options lpr accepts for file types is so old as to be almost completely useless; meanwhile, the standard scheme of guessing file types inside the print system is just a bad design overall. (MIME types aren't great but they're what we have.)

Posted late Saturday night, November 6th, 2011 Tags:

swcrypto could use a variety of enhancements:


  • use multiple cores efficiently (that already works reasonably well for multiple request streams)
  • use faster versions of complex transforms (CBC, counter modes) from our in-tree OpenSSL or elsewhere (e.g. libtomcrypt)
  • add support for asymmetric operations (public key)

Extra credit:

  • Tie public-key operations into veriexec (probably a very good start towards an undergrad thesis project).
Posted late Saturday night, November 6th, 2011 Tags:

Change the infrastructure so that dependency information (currently in files) is installed with a binary package and is used from there.

This is not an easy project.

pkgsrc currently handles dependencies by including files spread over pkgsrc. The problem is that these files describe the current state of pkgsrc, not the current state of the installed packages.

For this reason and because most of the information in the files is templated, the files should be replaced by the relevant information in an easily parsable manner (not necessarily make syntax) that can be installed with the package. Here the task is to come up with a definitive list of information necessary to replace all the stuff that's currently done in files (including: dependency information, lists of headers to pass through or replace by buildlink magic, conditional dependencies, ...)
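As a purely hypothetical illustration of "easily parsable information installed with the package", the format and all field names below are invented:

```
# Hypothetical /usr/pkg/pkgdb/libfoo-1.2/depends
depends        libbar>=2.0:../../devel/libbar
depends        libbaz>=1.1:../../devel/libbaz
pass-through   include/libbar/*.h
conditional    option:ssl  openssl>=1.0:../../security/openssl
```

The key property is that a tool can read this from the installed package database without evaluating make syntax or consulting the pkgsrc tree's current state.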

The next step is to come up with a proposal how to store this information with installed packages and how to make pkgsrc use it.

Then the coding starts to adapt the pkgsrc infrastructure to do it and show with a number of trivial and non-trivial packages that this proposal works.

It would be good to provide scripts that convert from the current state to the new one, and test it with a bulk build.

Of course it's not expected that all packages be converted to the new framework in the course of this project, but the further steps should be made clear.


  • invent a replacement for files, keeping current features
  • demonstrate your new tool as a replacement including new features
  • execute a bulk build with as many packages as possible using the new buildlink
Posted late Saturday night, November 6th, 2011 Tags:

Put config files (etc/) installed by pkgsrc into some version control system to help keeping track of changes and updating them.

The basic setup might look like this:

  • There is a repository containing the config files installed by pkgsrc, starting out empty.
  • During package installation, pkgsrc imports the package's config files into the repository onto a branch tagged with the name and version of the package (if available, on a vendor branch). (e.g.: digest-20080510)
  • After installation, there are two cases:

    1. the package was not installed before: the package's config files get installed into the live configuration directory and committed to the head of the config repository
    2. the package was installed before: a configuration update tool should display changes between the new and the previous original version as well as changes between the previous original and installed config file, for each config file the package uses, and support merging the changes made necessary by the package update into the installed config file. Commit the changes to head when the merge is done.
  • Regular automated check-ins of the entire live pkgsrc configuration should be easy to set up, as well as manual check-ins of individual files, so the local admin can use meaningful commit messages when they change their config, even if they are not experienced users of version control systems

The actual commands to the version control system should be hidden behind an abstraction layer, and the vcs operations should be kept simple, so that other compatibility layers can be written, and eventually the user can pick their vcs of choice (and also a vcs location of choice, in case e.g. the enterprise configuration repository is on a central subversion server).


  • choose a VCS system (BSD licensed is a nice-to-have)
  • write wrappers around it, or embed its functionality
  • demonstrate usage in upgrades


  • extend functionality into additional VCS systems

This project was done during Google Summer of Code 2018 by Keivan Motavalli in the Configuration files versioning in pkgsrc project. At the moment the code needs to be reviewed and imported in pkgsrc.

Posted late Saturday night, November 6th, 2011 Tags:

Using kernel virtualization offered by rump it is possible to start a virtually unlimited number of TCP/IP stacks on one host and interlink the networking stacks in an arbitrary fashion. The goal of this project is to create a visual GUI tool for creating and managing the networks of rump kernel instances, primarily to support testing and development activities.

The implementation should be split into a GUI frontend and a command-line network-control backend. The former can be used to design and edit the network setup, while the latter can be used to start and monitor the virtual network from e.g. automated tests.

The GUI frontend can be implemented in any language that is convenient. The backend should be implemented in a language and fashion which makes it acceptable to import into the NetBSD base system.

The suggested milestones for the project are:

  1. get familiar and comfortable with starting and stopping rump kernels and configuring virtual network topologies
  2. come up with an initial backend command language. It can be something as simple as bourne shell functions.
  3. come up with an initial specification and implementation of the GUI tool.
  4. make the GUI tool be able to save and load network specifications.
  5. create a number of automated tests to evaluate the usefulness of the tool in test creation.

For a capable student, the project goals can be set at a more demanding level. Examples of additional features include:

  • defining network characteristics such as jitter and packet loss for links
  • real-time monitoring of network parameters during a test run
  • UI gimmicks such as zooming and node grouping
Posted late Saturday night, November 6th, 2011 Tags:

All architectures suffer from code injection issues because the only writable segment is the PLT/GOT. RELRO (RELocation Read Only) is a mitigation technique that is used during dynamic linking to prevent access to the PLT/GOT. There is partial RELRO, which protects the GOT but leaves the PLT writable, and full RELRO, which protects both at the expense of performing full symbol resolution at startup time. The project is about making the necessary modifications to the dynamic loader (ld.elf_so) to make RELRO work.

If that is completed, then we can also add the following improvement: currently, kernels with options PAX_MPROTECT cannot execute dynamically linked binaries on most RISC architectures, because the PLT format defined by the ABI of these architectures uses self-modifying code. Newer binutils versions have introduced a different PLT format (enabled with --secureplt) for alpha and powerpc.


  • For all architectures we can improve security by implementing relro2.
  • Once this is done, we can improve security for the RISC architectures by adding support for the new PLT formats introduced in binutils 2.17 and gcc 4.1. This will require changes to the dynamic loader (ld.elf_so), various assembly headers, and library files.
  • Support for both the old and new formats in the same invocation will be required.
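Whether a given binary already benefits can be inspected with readelf: a GNU_RELRO program header means the relocated region can be remapped read-only, and the additional BIND_NOW dynamic flag upgrades that to full RELRO. A small helper sketch (the classification logic is illustrative, not part of the project):

```shell
# Classify the RELRO state of an ELF binary using readelf output.
# Prints "full RELRO", "partial RELRO", or "no RELRO".
check_relro() {
    if readelf -l "$1" 2>/dev/null | grep -q GNU_RELRO; then
        # BIND_NOW (or the NOW flag in FLAGS/FLAGS_1) forces all
        # symbol resolution at startup, so the GOT can stay read-only.
        if readelf -d "$1" 2>/dev/null | grep -qE 'BIND_NOW|FLAGS_1.*NOW'; then
            echo "full RELRO"
        else
            echo "partial RELRO"
        fi
    else
        echo "no RELRO"
    fi
}
```

For example, `check_relro /bin/sh` reports the protection state of the system shell.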

Status:

  • Added support to the dynamic loader (ld.elf_so) to handle protecting the GNU relro section.
  • Enabled partial RELRO by default on x86.

Posted late Saturday night, November 6th, 2011 Tags:

pkgsrc is a very flexible package management system. It provides a comprehensive framework to build, test, deploy, and maintain software in its original form (with porter/packager modifications where applicable) as well as with site-local modifications and customizations. All this makes pkgsrc suitable for use in diverse environments, ranging from small companies up to large enterprises.

While pkgsrc already contains most elements needed to build an authentication server (or an authentication server failover pair), installing one requires considerable knowledge about the necessary elements, and the correct configuration, while in most cases pretty much identical, is tedious and not without pitfalls.

The goal of this project is to create a meta-package that will deploy and pre-configure an authentication server suitable for a single sign-on infrastructure.

Necessary tasks: provide missing packages, provide packages for initial configuration, package or create corresponding tools to manage user accounts, document.
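The meta-package itself would be small, since a pkgsrc meta-package mostly declares dependencies on the real packages; a sketch of what a meta-pkgs/auth-server/Makefile might look like (the package name and the specific DEPENDS entries are illustrative only):

```mk
# Hypothetical meta-package; the real name, category, and dependency
# choices are part of the project.
PKGNAME=	auth-server-1.0
CATEGORIES=	meta-pkgs
MAINTAINER=	pkgsrc-users@NetBSD.org
COMMENT=	Pre-configured single sign-on authentication server

META_PACKAGE=	yes

DEPENDS+=	openldap-server-[0-9]*:../../databases/openldap-server
DEPENDS+=	heimdal-[0-9]*:../../security/heimdal
DEPENDS+=	postgresql16-server-[0-9]*:../../databases/postgresql16-server

.include "../../mk/bsd.pkg.mk"
```

The real work of the project lies in the configuration packages and management tools this meta-package would pull in.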

The following topics should be covered:

  • PAM integration with OpenLDAP and DBMS;
  • Samba with PAM, DBMS and directory integration;
  • Kerberos setup;
  • OpenLDAP replication;
  • DBMS (PostgreSQL is a must, MySQL optional, if time permits), replication (master-master, if possible);
  • DNS server with a sane basic dynamic DNS update config using directory and database backend;
  • user account management tools (web interface, command line interface, see user(8) manual page, perhaps some scripting interface);
  • configuration examples for integration of services (web services, mail, instant messaging, PAM is a must, DBMS and directory optional).

All covered services should be documented, in particular documentation should include:

  • initial deployment instructions;
  • sample configuration for reasonably simple case;
  • instructions how to test if the software works;
  • references to full documentation and troubleshooting instructions (if any), both should be provided on-site (i.e. it should be possible to have everything available given pkgsrc snapshot and corresponding distfiles and/or packages on physical media).

In a nutshell, the goal of the project is to make it possible for a qualified system administrator to deploy a basic service (without accounts) with a single pkg_add.

Posted late Saturday night, November 6th, 2011 Tags:

The goal of this project is to provide an alternative version of the NetBSD system installer with a simple, X based graphical user interface.

The installer currently uses a "homegrown" (similar to CUA) text-based interface, thus being easy to use over serial console as well as on a framebuffer console.

The current installer code is partly written in plain C, but in big parts uses C fragments embedded into its own definition language, preprocessed by the "menuc" tool into plain C code and linked against libterminfo.

During this project, the "menuc" tool is modified to optionally generate a different version of the C code, which then is linked against standard X libraries. The C stub fragments sprinkled throughout the menu definitions need to be modified to be reusable for both (text and X) versions. Where needed the fragments can just call C functions, which have different implementations (selected via a new ifdef).

Since the end result should be able to run off an enhanced install CD, the selection of widgets used for the GUI is limited. Only base X libraries are available. A look & feel similar to current xdm would be a good start.

Development can be done on an existing system; testing does not require actual installation on real hardware.

An optional extension of the project is to modify the creation of one or more port's install CD to make use of the new xsysinst.

Milestones include:

  • modify the "menuc" tool to support X
  • keep text/serial console installing
  • demonstrate a GUI install
  • demonstrate fallback to the text installer

The candidate must have:

  • familiarity with the system installer. You should have used sysinst to install the system.
  • familiarity with C and X programming.

The following would also be useful:

  • familiarity with NetBSD.
  • familiarity with user interface programming using in-tree X widgets.


Posted late Saturday night, November 6th, 2011 Tags:

Port valgrind to NetBSD for pkgsrc, then use it to do an audit of any memory leakage.

See also the existing work in progress on this port.

Posted late Saturday night, November 6th, 2011 Tags:

NetBSD supports a number of platforms where both 32bit and 64bit execution is possible. The best-known example is the i386/AMD64 pair, and the other important one is SPARC/SPARC64. On these platforms it is highly desirable to allow running all 32bit applications with a 64bit kernel. This is the purpose of the netbsd32 compatibility layer.

At the moment, the netbsd32 layer consists of a number of system call stubs and structure definitions written and maintained by hand. It is hard to ensure that the stubs and definitions are up-to-date and correct. One complication is the difference in alignment rules: on i386, uint64_t has 32bit alignment, but on AMD64 it uses natural (64bit) alignment. This and the resulting padding introduced by the compiler can create hard-to-find bugs.


Milestones:

  • replace the manual labour with an automatic tool

    This tool should allow both verification and generation of structure definitions for use in netbsd32 code, as well as generation of system call stubs and conversion functions. Generated stubs should also ensure that no kernel stack data is leaked in hidden padding, without having to resort to unnecessarily large memset calls.

For this purpose, the Clang C parser or the libclang frontend can be used to analyse the C code.

Posted late Saturday night, November 6th, 2011 Tags: