This page is a blog mirror of sorts. It pulls in articles from the blog's feed and publishes them here (with a feed, too).
The NetBSD Project is pleased to announce NetBSD 6.1.4, the fourth security/bugfix update of the NetBSD 6.1 release branch, and NetBSD 6.0.5, the fifth security/bugfix update of the NetBSD 6.0 release branch. They represent a selected subset of fixes deemed important for security or stability reasons, and if you are running a prior release of either branch, we strongly suggest that you update to one of these releases. A list of download sites may be found at http://www.NetBSD.org/mirrors/.
Due to a strange series of events, the code changes needed to support the (slightly unusual) MIPS CPU used in the playstation2 had never been merged into mainline gcc or binutils. This has only recently been fixed. Unfortunately, the changes have not been pulled up to the gcc 4.8.3 branch (which is available in NetBSD-current), so an external toolchain from pkgsrc is needed for the playstation2.
To install this toolchain, use a pkgsrc-current checkout, cd to cross/gcc-mips-current, and run "make install" - that is all.
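As a minimal sketch, assuming a pkgsrc-current checkout lives in /usr/pkgsrc (the path is an assumption; adjust to wherever your checkout is):

```shell
# Build and install the cross toolchain for the playstation2 port.
# /usr/pkgsrc is an assumed checkout location; adjust as needed.
cd /usr/pkgsrc/cross/gcc-mips-current
make install
```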
Work is in progress to bring the old code up to -current. Hopefully a bootable NetBSD-current kernel will be available soon.
Work is ongoing to bring this modern toolchain to all other ports too (most of them already work, but some more testing will be done). If you want to try it, just add -V HAVE_GCC=48 to the build.sh invocation.
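For example, a cross-build using the newer in-tree toolchain might be invoked as follows (the target machine and object directory here are illustrative placeholders; only the HAVE_GCC variable is the point):

```shell
# Cross-build a NetBSD-current release with gcc 4.8 as the in-tree toolchain.
# The -m target and -O object directory are illustrative; pick your own.
./build.sh -m amd64 -O ../obj -V HAVE_GCC=48 release
```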
Note that, in parallel, clang is already available as an alternative for a few architectures (i386, amd64, arm, and sparc64), but it needs more testing and debugging on at least some of them (e.g. the sparc64 kernel does not boot).
For a project with diverse hardware support like NetBSD, all toolchain updates are a big pain - so a big THANK YOU! to everyone involved; in no particular order Christos Zoulas, matthew green, Nick Hudson, Tohru Nishimura, Frank Wille (and myself).
The NetBSD Project is pleased to announce:
- NetBSD 6.1.3, the third security/bugfix update of the NetBSD 6.1 release branch,
- NetBSD 6.0.4, the fourth security/bugfix update of the NetBSD 6.0 release branch,
- NetBSD 5.2.2, the second security/bugfix update of the NetBSD 5.2 release branch,
- and NetBSD 5.1.4, the fourth security/bugfix update of the NetBSD 5.1 release branch.
These releases represent a selected subset of fixes deemed important for security or stability reasons. Updating to one of these versions is recommended for users of all prior releases.
For more details, please see the NetBSD 6.1.3 release notes, the NetBSD 6.0.4 release notes, the NetBSD 5.2.2 release notes, or the NetBSD 5.1.4 release notes.
Complete source and binaries for NetBSD 6.1.3, NetBSD 6.0.4, NetBSD 5.2.2 and NetBSD 5.1.4 are available for download at many sites around the world. A list of download sites providing FTP, AnonCVS, SUP, and other services may be found at http://www.NetBSD.org/mirrors/.
A cyclic trend in operating systems is moving things in and out of the kernel for better performance. Currently, the pendulum is swinging in the direction of userspace being the locus of high performance. The anykernel architecture of NetBSD ensures that the same kernel drivers work in a monolithic kernel, userspace and beyond. One of those driver stacks is networking. In this article we assume that the NetBSD networking stack is run outside of the monolithic kernel in a rump kernel and survey the open source interface layer options.
There are two sub-aspects to networking. The first facet is supporting network protocols and suites such as IPv6, IPsec and MPLS. The second facet is delivering packets to and from the protocol stack, commonly referred to as the interface layer. While the first facet in rump kernels is unchanged from the networking stack running in a monolithic NetBSD kernel, the interface layer supports a number of backends not available in kernel mode.
The Data Plane Development Kit is meant to be used for high-performance, multiprocessor-aware networking. DPDK offers network access by attaching to hardware and providing a hardware-independent API for sending and receiving packets. The most common runtime environment for DPDK is Linux userspace, where a UIO userspace driver framework kernel module is used to enable access to PCI hardware. The NIC drivers themselves are provided by DPDK and run in application processes.
For high performance, DPDK uses a run-to-completion scheduling model -- the same model is used by rump kernels. This scheduling model means that NIC devices are accessed in polled mode without any interrupts on the fast path. The only interrupts that are used by DPDK are for slow-path operations such as notifications of link status change.
Like DPDK, netmap offers user processes access to NIC hardware with high-performance userspace packet processing in mind. Unlike DPDK, netmap reuses NIC drivers from the host kernel and provides memory-mapped buffer rings for accessing the device packet queues. In other words, the device drivers remain in the host kernel, but low-level and low-overhead access to hardware is made available to userspace processes. In addition to the memory-mapping of buffers, netmap uses other performance optimizations such as batch processing and buffer preallocation, and can easily saturate a 10GigE link with minimum-size frames. Another significant difference from DPDK is that netmap also allows for a blocking mode of operation.
Netmap is coupled with a high-performance software virtual switch called VALE. It can be used to interconnect networks between virtual machines and processes such as rump kernels. The netmap API is also used by VALE, so VALE switching can be used with the rump kernel netmap driver.
A tap device injects packets written to a device node, e.g. /dev/tap, into a tap virtual network interface. Conversely, packets received by the tap virtual network interface can be read from the device node. The tap network interface can be bridged with other network interfaces to provide further network access. While indirect access to network hardware via the bridge is not maximally efficient, it is not hideously slow either: a rump kernel backed by a tap device can saturate a gigabit Ethernet link. The advantage of the tap device is portability, as it is widely available on Unix-type systems. Tap interfaces also virtualize nicely, and most operating systems will allow unprivileged processes to use a tap interface as long as the processes have the credentials to access the respective device nodes.
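On NetBSD, bridging a tap interface with a physical NIC might look roughly like the following sketch (wm0 is an assumed NIC name; run as root, and note that the exact commands differ on other operating systems):

```shell
# Create a tap interface and bridge it with the physical NIC wm0.
# wm0 is an assumed interface name; substitute your own NIC.
ifconfig tap0 create
ifconfig tap0 up
ifconfig bridge0 create
brconfig bridge0 add tap0 add wm0 up
# A rump kernel can now read and write packets via /dev/tap0.
```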
The tap device was the original method for network access with a rump kernel. In fact, the in-kernel side of the rump kernel network driver was rather short-sightedly named virt back in 2008. The virt driver and the associated hypercalls are available in the NetBSD tree. Fun fact: the tap driver is also the method for packet shovelling when running the NetBSD TCP/IP stack in the Linux kernel; the rationale is provided in a comment here and also by running wc -l.
After a fashion, using Xen hypercalls is a variant of using the tap device: a virtualized network resource is accessed using high-level hypercalls. However, instead of accessing the network backend through a device node, Xen hypercalls are used. The Xen driver is limited to the Xen environment and is available here.
NetBSD PCI NIC drivers
The examples we have discussed so far use a high-level interface to packet I/O functions. For example, to send a packet, the rump kernel will issue a hypercall which essentially says "transmit this data", and the network backend handles the request. When using NetBSD PCI drivers, the hypercalls work at a low level, and deal with, for example, reading and writing the PCI configuration space and mapping the device memory space into the rump kernel. As a result, using NetBSD PCI device drivers in a rump kernel works exactly like in a regular kernel: the PCI devices are probed during rump kernel bootstrap, the relevant drivers are attached, and packet shovelling works by the drivers fiddling with the relevant device registers.
The hypercall interfaces and necessary kernel-side implementations are currently hosted in the repository providing Xen support for rump kernels. Strictly speaking, there is nothing specific to Xen in these bits, and they will most likely be moved out of the Xen repository once PCI device driver support for other planned platforms, such as Linux userspace, is completed. The hypercall implementations, which are Xen specific, are available here.
For testing networking, it is advantageous to have an interface which can communicate with other networking stacks on the same host without requiring elevated privileges, special kernel features or a priori setup in the form of e.g. a daemon process. These requirements are filled by shmif, which uses file-backed shared memory as a bus for Ethernet frames. Each interface attaches to a pathname, and interfaces attached to the same pathname see the same traffic.
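As a sketch of how this works in practice, a rump kernel server can be started with shmif support and attached to a file-backed bus; a second server attached to the same file would see the same Ethernet traffic. The socket URL, bus path and address below are illustrative:

```shell
# Start a rump kernel server with the TCP/IP stack and shmif support.
# The unix:// socket path and bus file path are illustrative.
rump_server -lrumpnet -lrumpnet_net -lrumpnet_netinet -lrumpnet_shmif \
    unix:///tmp/rumpserv1

# Point the rump client tools at the server, then create a shmif
# interface and attach it to the file-backed bus.
export RUMP_SERVER=unix:///tmp/rumpserv1
rump.ifconfig shmif0 create
rump.ifconfig shmif0 linkstr /tmp/etherbus
rump.ifconfig shmif0 inet 10.0.0.1 netmask 255.255.255.0
# A second server attached to /tmp/etherbus sees the same traffic.
```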
The shmif driver is available in the NetBSD tree.
We presented a total of six open source network backends for networking with rump kernels. These backends represent four different methodologies:
- DPDK and netmap provide high-performance network hardware access using high-level hypercalls.
- TAP and Xen hypercall drivers provide access to virtualized network resources using high-level hypercalls.
- NetBSD PCI drivers access hardware directly using register-level device access to send and receive packets.
- shmif allows for unprivileged testing of the networking stack without relying on any special kernel drivers or global resources.
Choice is a good thing here, as the optimal backend ultimately depends on the characteristics of the application.
FOSDEM 2014 will take place on 1–2 February, 2014, in Brussels, Belgium. Just like in previous years, there will be both a BSD booth and a developer's room (on Saturday).
The topics of the devroom include all BSD operating systems. Every talk is welcome, from internal hacker discussion to real-world examples and presentations about new and shiny features. The default duration for talks will be 45 minutes including discussion. Feel free to ask if you want to have a longer or a shorter slot.
If you already submitted a talk last time, please note that the procedure is slightly different.
To submit your proposal, visit
and follow the instructions to create an account and an “event”. Please select “BSD devroom” as the track. (Click on “Show all” in the top right corner to display the full form.)
Please include the following information in your submission:
- The title and subtitle of your talk (please be descriptive, as titles will be listed alongside ~500 others from other projects)
- A short abstract of one paragraph
- A longer description if you wish to do so
- Links to related websites/blogs etc.
The deadline for submissions is December 20, 2013. The talk committee, consisting of Daniel Seuffert, Marius Nünnerich and Benny Siegert, will consider the proposals. If yours has been accepted, you will be informed by e-mail before the end of the year.
The following report is by Manuel Wiesinger:
First of all, I would like to thank the NetBSD Foundation for enabling me to successfully complete this Google Summer of Code. It has been a very valuable experience for me.
My project is a defragmentation tool for FFS. I want to point out at the beginning that it is not ready for use yet.
What has been done:
Fragment analysis + reordering. When a file is smaller than or equal to the file system's fragment size, it is stored as a fragment. One can think of a fragment as a partial block. It can happen that there are many small files that each occupy a fragment. As the file system changes over time, many blocks may end up containing fewer fragments than they could hold. The optimization my tool does is to pack all these fragments into fewer blocks. This way the system may get a little more free space.
Directory optimization. When a directory entry gets deleted, its space is appended to the previous entry. This can be imagined like a linked list. My tool reads that list and writes all entries sequentially.
Non-contiguous files analysis + reordering strategy. This is what most other operating systems call defragmentation - a reordering of blocks, so that blocks belonging to the same file or directory can be read sequentially.
What did not work as expected
Testing: I thought that working with unit tests - strict test-driven development - would be the most productive and stable approach, but playing around with rump/atf was not really effective. Still, I always started a new implementation step by generating a file system in a state where it could be optimized. So I wrote the scripts and checked whether they did what I intended (of course, they did not always).
I'm a bit disappointed by the amount of code. But as I said before, the hardest part was figuring out how things work. The code accomplishes relatively much for its size; I expected more lines of code to be needed to get where I am now.
Before applying for this project, I took a close look at UFS. But it was not close enough; there were many surprises. For example, I had no idea that files contain gaps on purpose, to exploit the rotation of hard disks.
Time management: everything took longer than I expected, mostly because it was really hard to figure out how things work. Lacking documentation is a huge problem too.
Things I learned
A huge lesson learned in software engineering: things always turn out differently than expected when you do not have a lot of experience.
I feel more confident reading and patching kernel code. All my previous experience was not so in-depth (e.g., I worked with Pintos). The (mental) barrier of kernel/system programming is gone. For example, I now see a chance to take a look at ACPI and see if I can write a patch to get suspend working on my notebook.
I got more contact with the NetBSD community, and got a nice overview of how things work. The BSD community here is very mixed; there are not many NetBSD people.
CVS is better than most of my friends say.
I learned about pkgsrc, UVM, and other smaller things about NetBSD too, but that's not worth mentioning in detail.
How I intend to continue:
After a sanity break from the whole project, there are several possibilities.
In the next days I will speak to a supervisor at my university about whether I can continue the project as a project thesis (I still need to do one). It may even include online defragmentation, based on snapshots. That's my preferred option.
I definitely want to finish this project, since I spent so much time and effort. It would be a shame otherwise.
What there is to do technically
Once defragmentation works when there is enough space to move a file, I want to find a way to defragment even when there is too little space. This can be achieved by simply moving blocks piece by piece, using the file's own space as 'free space'.
Online defragmentation. I already skimmed how snapshots work; it should be possible.
Improve the tests.
It should be easy to get this compiling on older releases. Currently it compiles only on -current.
In the long run, it is probably worth porting the tests to atf/rump.
I will continue to work, and definitely continue to use and play with NetBSD!
It's a stupid thing that a defrag tool is worth little (nothing?) on SSDs. But since NetBSD is designed to run on (almost) any hardware, this does not bug me a lot.
Thank you, NetBSD! Although it was a lot of hard work, it was a lot of fun!
The NetBSD Project is pleased to announce NetBSD 6.1.2, the second security/bugfix update of the NetBSD 6.1 release branch, and NetBSD 6.0.3, the third security/bugfix update of the NetBSD 6.0 release branch. They represent a selected subset of fixes deemed important for security or stability reasons, and if you are running a prior release of NetBSD 6.x, you are recommended to update. A list of download sites may be found at http://www.NetBSD.org/mirrors/.
The NetBSD Project is pleased to announce NetBSD 5.2.1, the first security/bugfix update of the NetBSD 5.2 release branch, and NetBSD 5.1.3, the third security/bugfix update of the NetBSD 5.1 release branch. They represent a selected subset of fixes deemed important for security or stability reasons, and if you are running a release of NetBSD prior to 5.1.3, you are recommended to update to a supported NetBSD 5.x or NetBSD 6.x version. A list of download sites may be found at http://www.NetBSD.org/mirrors/.
Updates to NetBSD 6.x will be coming in the next few days.