Using pbulk to create a pkgsrc binary repository
The pkgtools/pbulk package consists of a set of tools designed to ease mass-building of pkgsrc packages and the creation of your own pkgsrc binary repository.
Its setup needs a bit of work, so here is how to prepare and run your bulk-build box. In this article we will only consider a one-node machine.
This documentation is based on the pkgsrc Guide.
Prerequisites
These are the prerequisites needed by pbulk:
- A pkgsrc source tree
- Possibly a src source tree; only some rare packages need it. If you choose not to check out src, simply create a /usr/src directory so mksandbox (see below) doesn't complain about a non-existent directory.
- Optionally, a tool like misc/screen or misc/tmux, as the full build process can take a very long time.
For example, to prepare a pkgsrc-2011Q3 bulk build:
# cd /usr
# cvs -d anoncvs@anoncvs.netbsd.org:/cvsroot co -rpkgsrc-2011Q3 pkgsrc
Avoid automatic updates of the pkgsrc tree (from cron or the like); an update in the middle of a build could lead to unfortunate results.
Prepare a chroot
In order to isolate the bulk build, it is advised that you run all the operations within a chroot. Running pbulk on your real environment would wipe all of your installed packages and would modify your base system with lots of directories, users and groups you don't need.
Fortunately, a tool called mksandbox will simplify this process. mksandbox is located in the pkgtools/mksandbox package, and it is called like this:
# mksandbox [optional flags] /path/to/sandbox
For example, to create a sandbox in /home/bulk without the X11 system, run:
# mksandbox --without-x /home/bulk
This command will prepare and mount most of the needed directories, and will place a shell script called sandbox at the top of the sandbox filesystem. This script is used to mount and unmount your sandbox. It is a good idea to add /var/spool to the list of directories mounted read/write in your sandbox so that the email report is actually sent. Simply add:
/var/spool /var/spool rw \
to the list of directories in the sandbox script. Using the sandbox script is straightforward:
# /path/to/your/sandbox/sandbox umount
Unmounts the sandbox
# /path/to/your/sandbox/sandbox mount
Mounts the sandbox
Prepare the pbulk environment
Now that our sandbox is available and mounted, we will chroot to it by executing the sandbox script without any parameter specified:
# /path/to/your/sandbox/sandbox
Create the pbulk user (a Bourne-style shell must be used):
# useradd -s /bin/sh pbulk
In a file, set the preferences to use when building packages (an mk.conf fragment). Here is a sample mk.conf.frag file:
SKIP_LICENSE_CHECK= yes
ALLOW_VULNERABLE_PACKAGES= yes
PKG_DEVELOPER?= yes
WRKDIR= /tmp/work
# site specific changes
PKG_OPTIONS.irssi= perl inet6
PKG_OPTIONS.mplayer= oss
DSPAM_STORAGE_DRIVER= mysql
PKG_OPTIONS.dspam+= graphs
PKG_OPTIONS.dovecot= ssl ldap dovecot-sieve dovecot-managesieve
PKG_OPTIONS.nagios-nrpe=ssl tcpwrappers
X11_TYPE= modular
Deploy and configure pbulk tools
# sh /usr/pkgsrc/mk/pbulk/pbulk.sh -n -c mk.conf.frag
The pbulk configuration file is /usr/pbulk/etc/pbulk.conf. You may want to review and customize some parameters, like "base_url" and "report_recipients".
Also, in order to avoid hangs, it might be a good idea to add the following to the top of pbulk.conf:
ulimit -t 1800 # set the limit on CPU time (in seconds)
ulimit -v 2097152 # limits process address space
Running the build
Now that everything's in place, we can fire up the build from the chroot using the following command:
# /usr/pbulk/bin/bulkbuild
It is recommended to run the build inside a tool like misc/screen or misc/tmux as it will take a lot of time.
If the build is stopped, it is possible to restart it by invoking:
# /usr/pbulk/bin/bulkbuild-restart
Hints
If you'd like to rebuild a single package, use the bulkbuild-rebuild command followed by the package name, for example:
# /usr/pbulk/bin/bulkbuild-rebuild mail/mutt
Project template description
The project template provides a consistent set of variables and tags to define a project proposal and/or specification.
The following parameters are supported:
- title (required)
- contact (required)
- done_by (optional): set to the name of the person who completed the project. This adds a note to the project mentioning that it has been completed and removes it from the indexes. Do not move project pages or delete them; by setting this tag, the URL will remain valid.
- mentors (optional)
- category (required): one of "filesystems", "kernel", "misc", "networking", "pkgsrc", "ports" or "userland".
- difficulty (required): one of "easy", "medium" or "hard".
- funded (optional): set to the name of the organization or individual that is willing to fund this project.
- duration (optional)
- description (required)
The following tags should be set to classify the project into different indexes:
- gsoc: Use this tag to denote a project suitable for the Google Summer of Code program. If you set this tag, the project must provide a set of mentors and its duration has to be 3 months.
This page contains all the available project proposals as a single document to permit easy searching within the page.
Note that this page also contains an Atom/RSS feed. If you are interested in being notified about changes to the projects whenever these pages get updated, you should subscribe to this feed.
- Contact: jkoshy@NetBSD.org
- Mentors: Joseph Koshy
- Duration estimate: 3 months
Execute non-native executables directly by passing them to an instruction set emulator at execve() time.
For more information, please see jkoshy.net.
This is not an easy project. It touches upon:
- Linking and loading,
- Instruction set architectures,
- Binary emulation,
- POSIX compatibility,
- and others.
It also has a small kernel component involving changes to the execve() path.
Primary milestones:
- (Proof of concept) Invoke qemu in its 'User-Mode Emulation' mode on a non-native binary.
- Contact: jkoshy
- Mentors: Joseph Koshy
- Duration estimate: 350h
Implement efficient binary package updates by patching prior releases instead of downloading large packages afresh - please see: (jkoshy.net) Efficient Package Distribution
Primary milestones:
- Define the patching protocol between the package manager client (i.e., pkgin) and the Patch server.
- Implement the 'Patch Server', defaulting to current behavior when binary patches are missing.
- Add patch protocol support to the pkgin package management client.
- On the 'Patch Server', implement a pipeline to generate binary patches whenever new package releases are added to it.
Nice to have:
- Add file-format-specific (i.e., ELF-, JPEG-, PNG- specific) binary patch generation.
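The patch-based distribution idea above can be illustrated with a toy text-based example using diff(1) and patch(1). A real implementation would use a binary delta tool such as bsdiff, and all file names here are made up:

```shell
set -e
d=$(mktemp -d)

# two releases of the same (pretend) package
printf 'shared data\npkg contents v1\n' > "$d/pkg-1.0"
printf 'shared data\npkg contents v2\n' > "$d/pkg-2.0"

# the "Patch Server" generates a delta when a new release is added
diff -u "$d/pkg-1.0" "$d/pkg-2.0" > "$d/pkg-1.0-to-2.0.patch" || true

# the client reconstructs the new release from its old copy plus the
# (much smaller) delta instead of downloading the whole package afresh
cp "$d/pkg-1.0" "$d/reconstructed"
patch -s "$d/reconstructed" "$d/pkg-1.0-to-2.0.patch"
cmp -s "$d/reconstructed" "$d/pkg-2.0" && echo "patched copy matches 2.0"
```

The same round trip is what the proposed protocol would do over the network, with the server falling back to shipping the full package when no delta is available.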
- Contact: tech-kern
- Duration estimate: ~1 month
Implement a fullfs, that is, a filesystem where writes fail with disk full. This is useful for testing applications and tools that often don't react well to this situation because it rarely occurs any more.
The basic fullfs would be just a layerfs layer that you can mount (like nullfs) to get a copy of an existing subtree or volume where writes are rejected with ENOSPC. This is the first thing to get running.
However, for testing it is good to have more than that, so the complete project includes the ability to control the behavior on the fly and a fullfsctl(8) binary that can be used to adjust it.
These are some things (feel free to brainstorm others) that it would be useful for fullfsctl to be able to do:
- Turn on and off the fail state (so for example you can start up a program, let it run for a while, then have the disk appear to fill up under it)
- Arm a "doom counter" that allows the next N writes to succeed and then switches to the fail state (to test what happens if the disk fills partway through a write or save operation)
- Change what error it fails with (ENOSPC is the basic error, but at least EDQUOT and possibly other errors, such as NFS-related ones, are also interesting)
fullfs itself should be implemented as a layerfs layer, not a whole filesystem.
fullfsctl should operate via one or more file-system-specific ioctls applied to the root directory of (or perhaps any file on) the fullfs volume.
There are many ways this could be extended further to provide for more general fault injection.
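The "doom counter" semantics described above can be sketched in a few lines of shell. Everything here is purely illustrative (the real interface would be a fullfsctl ioctl inside the kernel, not a shell function):

```shell
# Simulation of the proposed doom-counter behavior: the next N writes
# succeed, and every write after that fails with the configured error.
DOOM_COUNTER=3      # writes still allowed to succeed
FAIL_ERRNO=ENOSPC   # error returned once the counter reaches zero

fullfs_write() {
    if [ "$DOOM_COUNTER" -gt 0 ]; then
        DOOM_COUNTER=$((DOOM_COUNTER - 1))
        echo "write ok"
    else
        echo "write failed: $FAIL_ERRNO"
    fi
}

# the first three writes succeed, the remaining two fail
for i in 1 2 3 4 5; do
    fullfs_write
done
```

Swapping FAIL_ERRNO for EDQUOT or an NFS error corresponds to the "change what error it fails with" control mentioned above.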
- Contact: tech-kern
- Duration estimate: 1 month
Assess and, if appropriate, implement RFC 5927 countermeasures against IPv6 ICMP attacks on TCP. Write ATF tests for any countermeasures implemented, as well as ATF tests for the existing IPv4 countermeasures.
This project will close PR kern/35392.
The IPv4 countermeasures were previously implemented here: https://mail-index.NetBSD.org/source-changes/2005/07/19/msg166102.html
- Contact: tech-kern
- Mentors: Taylor R Campbell
- Duration estimate: 1-2 months
Implement automatic tests with ATF for all the PAM modules under src/lib/libpam/modules.
The modules, such as pam_krb5, are not currently automatically tested, despite being security-critical, which has led to severe regressions.
- Contact: tech-pkg
- Mentors: Amitai Schleier
- Duration estimate: 1 month
Buildlink is a framework in pkgsrc that controls what headers and libraries are seen by a package's configure and build processes. We'd like to expand on this isolation by:
- Hiding -L${PREFIX}/{include,lib} from CFLAGS/LDFLAGS, so that (for instance) undeclared buildlink dependencies will give the needed "not found" earlier in the package-development cycle
- Hiding ${PREFIX}/bin from PATH, so that (for instance) undeclared build- or run-time tools will give the needed "not found" earlier in the package-development cycle
Steps:
- Do bulk builds with the existing defaults on a few platforms (e.g., NetBSD, macOS, Illumos, Linux)
- Rerun the bulk builds from scratch, this time with the desired infrastructure changes
- Compare before and after to see which package builds break
- Fix them
- Enable the infrastructure changes by default for all users
- Contact: tech-pkg
- Mentors: Amitai Schleier
- Duration estimate: 1 month
bsd.pkg.mk says: # To sanitize the environment, set PKGSRC_SETENV=${SETENV} -i.
We'd like to enable this isolation by default. Steps:
- Do bulk builds with the existing defaults on a few platforms (e.g., NetBSD, macOS, Illumos, Linux)
- Rerun the bulk builds from scratch, this time with the desired infrastructure change
- Compare before and after to see which package builds break
- Fix them
- Enable the infrastructure change by default for all users
- Contact: tech-pkg
To create large sets of binary packages, pkgsrc uses the pbulk tool.
pbulk is designed to build packages in bulk in a chroot sandbox (which may be based on an older NetBSD release, a 32-bit release on a 64-bit machine, or similar).
However, managing multiple pbulk installations on a single machine can quickly become unwieldy.
It would be nice to have a web frontend for managing pbulk, restarting builds, creating new build sandboxes based on different NetBSD versions and package sets, etc.
For an example of a typical high-performance pbulk setup used on "large" hardware, see Nia's pbulk scripts. Notably, these include chroot management and can be used to build packages for multiple NetBSD (and pkgsrc) versions simultaneously by using different "base directories".
- A management tool should support parallel pbulk across many chroots, but should also easily allow simultaneous builds for different NetBSD versions on the same machine.
- It should be possible to schedule builds for later, somewhat like Jenkins.
- There should be some kind of access management system.
- pbulk creates status directories; it would be useful to present as much data from them as possible, as well as information on system resource usage (e.g. to find any stuck processes and monitor hardware health).
- It should be possible for users to "kick" the builds (to quickly restart them, kill "stuck" processes, and so on).
- It should be possible to schedule builds for upload after manual review of the reports, resulting in an rsync from a local staging directory to ftp.netbsd.org.
- It should be usable "over a morning coffee".
- For bonus points, it should be easily installable from pkgsrc, ideally with an RC script.
- For bonus points, integrate well with the NetBSD base system tools (e.g. bozohttpd).
- Contact: tech-install
Currently NetBSD can be booted via UEFI firmware, but only offers the default boot loader setup so multi-boot environments are hard to create. This also causes cryptic displays in the firmware boot order menu or boot select menu, like "UEFI OS", instead of "NetBSD 10.0".
The UEFI spec offers support to configure load options, which include a path to the bootloader and a description of the operating system; see the UEFI spec. This project is to implement setting up proper load option variables, at least on x86 machines booting via UEFI.
Part of the project is to find the best place to set these options up. Some integration with sysinst might be needed; maybe sysinst is the right place to set these variables. If not, sysinst may simply be changed to use a different subdirectory on the ESP for the NetBSD bootloader, and the variable setup might happen elsewhere.
Currently the kernel interface to access the SetVariable() and other EFI runtime callbacks exists, but there is no userland tool to operate it.
It is not clear what the EFI path set in the variable should be, and mapping NetBSD disks/partitions to EFI path notation is not trivial.
- Contact: tech-pkg
- Mentors: unknown
- Duration estimate: unknown
In recent years, packages whose builds download things on the fly have become an increasing problem. Such downloads violate both pkgsrc principles/rules and standard best practices: at best, they bypass integrity checks on the downloaded material, and at worst they may check out arbitrary recent changes from GitHub or other hosting sites, which in addition to being a supply-chain security risk also makes reliable repeatable builds impossible.
It has simultaneously grown more difficult to find and work around these issues by hand; these techniques have been growing increasingly popular among those who should perhaps know better, and meanwhile upstream build automation has been steadily growing more elaborate and more opaque. Consequently, we would like a way to automatically detect violations of the rules.
Currently, pbulk runs in two steps: first it scans the tree to find out what it needs to build, and once this is done it starts building. Builds progress in the usual pkgsrc way: the fetch phase comes first and downloads anything needed, then the rest of the build continues. This interleaves (expected) downloads and build operations, which on the one hand tends to give the best build throughput but on the other makes it difficult to apply network access restrictions.
The goal of this project is to set up infrastructure such that bulk builds can run without network access during the build phase. (Or, more specifically, in pkgsrc terms, everything other than the fetch phase.) Then attempts to download things on the fly will fail.
There are two ways to accomplish this and we will probably want both of them. One is to add a separate fetch step to pbulk, so it and the build step can run with different global network configuration. The other is to provide mechanisms so that expected downloads can proceed and unwanted ones cannot.
Separate fetch step
Since bulk builds are generally done on dedicated or mostly-dedicated systems (whether real or virtual) system-wide changes to the network configuration to prevent downloads during the build step will be in most setups acceptable, and also sufficient to accomplish the desired goals.
There are also two possible ways to implement a separate fetch step: as part of pbulk's own operational flow or as a separate external invocation of the pbulk components. The advantage of the former is that it requires less operator intervention, while the advantage of the latter is that it doesn't require teaching pbulk to manipulate the network state on its own. (In general it would need to turn external access off before the build step, and then restore it after in order to be able to upload the build results.) Since there are many possible ways one might manipulate the network state and the details vary between operating systems, and in some cases the operations might require privileges the pbulk process doesn't have, making pbulk do it adds considerable complication; on the other hand, setting up pbulk is already notoriously complicated and requiring additional user-written scripts to manipulate partial build states and adjust the network is a significant drawback.
Another consideration is that to avoid garbaging the failure reports, any download failures need to be recorded with pbulk during the download step, and the packages affected marked as failed, so that those failures end up in the output results. Otherwise the build step will retry the fetch in an environment without a network, and that will then fail but nobody will be able to see why. For this reason it isn't sufficient to just, for example, run "make -k fetch" from the top level of pkgsrc before building.
Also note that rather than just iterating the package list and running "make fetch" from inside pbulk, it might be desirable to use "make fetch-list" or similar and then combine the results and feed them to a separate download tool. The simple approach can't readily adapt to available network bandwidth: depending on the build host's effective bandwidth from various download sites it might either flood the network or be horrendously slow, and no single fixed parallelism setting can avoid this. Trying to teach pbulk itself to do download load balancing is clearly the wrong idea. However, this is considerably more involved, if only because integrating failure results into the pbulk state is considerably more complicated.
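A minimal sketch of the combine-and-deduplicate idea follows. The URLs are invented; a real version would consume the output of "make fetch-list" for each package rather than hand-written lists:

```shell
set -e
d=$(mktemp -d)

# pretend per-package download lists, as a fetch-list pass might emit them
printf 'https://example.org/distfiles/foo-1.0.tar.gz\n' > "$d/pkgA.urls"
printf 'https://example.org/distfiles/libbar-2.1.tar.gz\nhttps://example.org/distfiles/foo-1.0.tar.gz\n' > "$d/pkgB.urls"

# combine and de-duplicate before handing everything to one download
# tool, so shared distfiles are fetched only once and the downloader
# can do its own rate limiting
sort -u "$d"/*.urls > "$d/all.urls"
wc -l < "$d/all.urls"   # two unique distfiles instead of three downloads
```

The hard part, as noted above, is not this step itself but feeding any per-distfile failures back into pbulk's state so the affected packages are marked failed.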
Blocking unauthorized downloads
The other approach is to arrange things so that unwanted downloads will fail. There are a number of possible ways to arrange this on the basic assumption that the system has no network access to the outside world by default. (For example, it might have no default route, or it might have a firewall config that blocks outgoing connections.) Then some additional mechanism is introduced into the pkgsrc fetch stage so that authorized downloads can proceed. One mechanism is to set up a local HTTP proxy and only make the proxy config available to the fetch stage. Another, possibly preferable if the build is happening on a cluster of VMs or chroots, is to ssh to another local machine to download there; that allows mounting the distfiles directory read-only on the build hosts. There are probably others. Part of the goal of this project should be to select one or a small number of reasonable mechanisms and provide the necessary support in pbulk so each can be enabled in a relatively turn-key fashion. We want it to be easy to configure this restriction and ideally in the long term we'd like it to be able to default to "on".
Note that it will almost certainly be necessary to strengthen the pkgsrc infrastructure to support this. For example, conditionally passing around HTTP proxy config depending on the pkgsrc phase will require changes to pkgsrc itself. Also, while one can redirect fetch to a different tool via the FETCH_USING variable, as things stand this runs the risk of breaking packages that need to set it itself. There was at one point some talk about improving this but it apparently never went anywhere.
An additional possibility along these lines is to leverage OS-specific security frameworks to prohibit unwanted downloading. This has the advantage of not needing to disable the network in general, so it can be engaged even for ordinary builds and on non-dedicated hosts. However, because security frameworks and their state of usability vary, it isn't a general solution. In particular on NetBSD at the moment we do not have anything that can do this. (It might be possible to use kauth for it, but if so the necessary policy modules do not currently exist.) Consequently this option is in some ways a separate project. Note if attempting it that simplistic solutions (e.g. blocking all attempts to create sockets) will probably not work adequately.
Other considerations
Note when considering this project:
- Working on pbulk at all is not entirely trivial.
- It's necessary to coordinate with Joerg and other stakeholders so that the changes can eventually be merged.
- Any scheme will probably break at least a few innocent packages that will then probably need somebody to patch them.
Overall, one does hope that any package that attempts to download things and finds it can't will then fail rather than silently doing something different that we might or might not be able to detect. It is possible in the long run that we'll want to use a security framework that can log download attempts and provide an audit trail; however, first steps first.
- Contact: tech-userland
Java software such as Eclipse uses SWT, the "Standard Widget Toolkit".
It would be good to have a NetBSD port. Since SWT already runs on Linux, the work is probably mostly adapting that port to NetBSD.
NetBSD includes various simple, command-line audio tools by default, such as audioplay(1), audiorecord(1), mixerctl(1), aiomixer(1), audiocfg(1)...
These tools are useful because they provide almost everything a user needs to test basic functionality of their audio hardware. They are critically important for basic diagnostics.
It would be nice to have a tool to easily visualize audio input using a simple Curses interface. Some ideas for its possible functionality:
- Display basic live-updating frequency graph using bars
- Display channels separately
- 'Echo' option (play back audio as it is input)
- pad(4) support (NetBSD has support for 'virtual' audio devices. This is useful because you can record the output of an application by having it output to the audio device that opening /dev/pad creates. This can also 'echo' by outputting the data read from the pad device.)
You need NetBSD installed on physical hardware (older laptops work well and are cheaply available) and a microphone for this project. Applicants should be familiar with the C programming language.
pkgsrc is NetBSD's native package building system. It's also used on other platforms, such as illumos. It includes numerous graphical environments, including Xfce, MATE, and LXQt, but support for Enlightenment has since bitrotted and been largely removed. Support for its related fork Moksha is missing entirely.
Enlightenment is particularly interesting for NetBSD because it's lightweight, BSD licensed, and suitable for mobile applications. We're not sure about the benefits of Moksha over Enlightenment proper, but it's worth investigating.
Since Enlightenment is written in C, the applicant should ideally have a basic understanding of C and Unix system APIs. In order for the port not to bit-rot in the future, it should be done well, with patches integrated upstream where possible. They should have a laptop with NetBSD installed (older laptops are likely more representative of typical NetBSD uses and can be picked up cheaply from local auction sites).
Integrating Enlightenment into pkgsrc will require knowledge of build systems and make (pkgsrc in particular is built on top of BSD make).
Milestones:
- A basic port enables basic Enlightenment installation on NetBSD when installed from pkgsrc.
- A more advanced and ideal port has tight integration with NetBSD system APIs, supporting features like native audio mixing and reading from sensors.
- For extra brownie points, the pkgsrc package should work on illumos too.
A core component of NetBSD is the 'xsrc' repository, which contains a 'classic' distribution of X11, all related programs, and libraries, as found on Unix systems from times of yore.
xsrc uses the NetBSD build system and only BSD make to build, which means it builds extremely quickly, with minimal dependencies, and is easy to cross-compile. It currently includes an implementation of the OpenGL graphics API (Mesa), but not an implementation of the next-generation Vulkan graphics API, or OpenCL, the GPU-accelerated compute API, both of which can also be obtained from Mesa.
Most of modern X.Org is built with Meson and Python, so some level of translation is required to integrate new components.
This project involves making modifications to the Mesa Vulkan and OpenCL libraries in order to allow them to work on NetBSD (this part requires basic knowledge of the C programming language and Unix APIs), ideally submitting them upstream, until Vulkan and OpenCL support can be built on NetBSD, and then integrating the relevant components into the NetBSD build system using only BSD Make.
The candidate should ideally have some knowledge of the C programming language and build systems.
pkgsrc is NetBSD's native package building system. It includes numerous graphical environments, including Xfce, MATE, and LXQt, but many have limited support for native NetBSD system APIs, e.g. for reading battery levels and audio volume.
We would really like better desktop environment integration, and this requires some work on the upstream projects in C and in some cases C++.
An applicant should have basic familiarity with build systems, make, and C. They should be good at carefully reading documentation, as much of this material is documented in manual pages like audio(4) and envsys(4). They should have a laptop with NetBSD installed (older laptops are likely more representative of typical NetBSD uses and can be picked up cheaply from local auction sites).
They should be able to investigate the current level of support in various third-party projects and identify priority targets where native code for NetBSD can be written.
Nia is very experienced in writing native code for NetBSD audio and sensors and would be happy to answer questions.
As the project continues, we might even be able to start porting more applications and applets.
The NetBSD Wi-Fi stack is being reworked to support newer protocols, higher speeds, and fine-grained locking using code from FreeBSD. As part of this work, all existing NetBSD Wi-Fi drivers need to be reworked to the new Wi-Fi code base.
Successful completion of this project requires you to have access to hardware that is already supported by NetBSD but not yet converted. See the Driver state matrix for a list of devices to convert. Many older devices can be found cheaply on sites like eBay.
When applying for this project, please note which driver(s) you want to convert.
- Contact: tech-userlevel
- Duration estimate: unknown
Currently resize_ffs(8) does not work with FFSv2 file systems.
This is a significant problem, since we currently rely on resize_ffs to provide live images for ARM, and FFSv1 is lacking in various ways (e.g. 2038-limited timestamps and a maximum disk size of 1TB).
- Contact: Martin Husemann, tech-pkg
- Duration estimate: 1 month
We are currently not distributing official TNF binary packages with embedded signatures. The pkgsrc infrastructure seems to be mostly there, but there are loose ends, and this makes NetBSD fall behind other pkgsrc users, for whom everything needed comes with the bootstrap kit.
There have been various related experiments and discussions in the past, and the responsible persons are willing to change it now (that is: ideally have all binary pkgs for NetBSD 10 signed and verified already).
This project is about fixing the loose ends.
Intended user workflow
- the user installs a new system
- at the end of the sysinst installation the config page offers a binary pkgs setup
- the user selects a repository (with a working default) and sysinst triggers all necessary configuration and installation actions (this may involve downloads and running fixup scripts, but may not require manual intervention)
- after a reboot of the new machine, binary pkgs can be directly added and will be automatically verified (e.g.: "pkg_add firefox" or "pkg_add xfce4" will work)
Implementation details
The following drafts a possible pkgsrc/pkgbuilders/releng workflow and assumes x509 signing. This is just to make this project description clearer; the project does not require an x509-based solution.
Operational workflow for pkg creation
- Thomas Klausner (wiz) is the keeper of the pkgsrc master CA key. He creates intermediate CA keys for every developer in charge of some pkgbuilding machines, signs them with the master CA key and distributes them.
- The public part of the master CA certificate becomes part of the NetBSD release and is available as a trust anchor.
- Every developer in charge of some pkgbuild machines creates a signing key (without passphrase) from their intermediate CA key and installs it on the individual pkg build machine
The main point of the whole process is that NetBSD and pkgsrc have different release cycles, and pkg building machines come and go. We do not want a fixed set of allowed machine signing keys distributed with a (long-living) NetBSD release, but we do not want to just trust whatever the binary pkg repository offers, so there needs to be proper automatic validation of all keys used for a repository against some trust anchor provided with the base system. With the current size of the project it might be manageable to have all finally used signing keys signed directly by the pkgsrc master key, but a design that allows an interim step where individual signing keys could be created by the developers in charge of the machines would be preferable.
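The chain model can be prototyped with plain openssl(1). Everything below is a throwaway illustration (hypothetical subject names, temporary files), not the actual pkgsrc signing procedure:

```shell
set -e
d=$(mktemp -d)

# 1. the pkgsrc master CA (in reality kept by one developer)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=pkgsrc-master-CA" \
    -keyout "$d/ca.key" -out "$d/ca.pem" -days 2 2>/dev/null

# 2. a per-builder signing key, certified by the master CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=build-host-1" \
    -keyout "$d/builder.key" -out "$d/builder.csr" 2>/dev/null
openssl x509 -req -in "$d/builder.csr" -CA "$d/ca.pem" -CAkey "$d/ca.key" \
    -CAcreateserial -out "$d/builder.pem" -days 1 2>/dev/null

# 3. verification needs only the master CA cert as trust anchor; the
#    builder cert itself never has to be distributed with the release
openssl verify -CAfile "$d/ca.pem" "$d/builder.pem"
```

This is exactly the property wanted below: builder certificates can come and go without the base system's trust anchor ever changing.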
Deliverables for this project
- all required changes (if any) for the pkgtools and pkgsrc makefiles, or any new tools/scripts (either as a set of patches or committed)
- a description of the overall workflow, e.g. as a wiki page or as part of the web site
- concrete instructions for the various parties involved in the deployment:
- pkgsrc master key/cert handling (Thomas)
- releng: how to make the trust anchor part of the release and what needs to be configured/done by sysinst
- globally
- post pkg repository selections
- pkg build administrators: how to create signing keys and how to configure the pkgbuild machines
And of course all this needs to be tested upfront.
Bonus
If this project succeeds and does not use x509, propose removal of the bit rotted and not fully functional x509 support from pkg tools and the pkgsrc infrastructure.
Setup tried so far and how it fails
Thomas (wiz@) provided the certificate for the TNF CA, intended to be used to verify all signed binary pkgs. When everything works, this key should be part of the base system.
Thomas also created a cert+key for the test setup, signed by the TNF CA key, intended to be used to sign binary pkgs on a single pkg build setup.
The instructions for these two steps are in pkgsrc/pkgtools/pkg_install/files/x509/signing.txt - a script and a config file are in the same directory.
On the build machine, the setup is simple:
- Store the keys, for example, in /etc/pkg-certs. The names used below are 00.pem for the TNF CA cert, pkgkey_cert.pem for the individual builder certificate, and pkgkey_key.pem for the corresponding key (which needs to have no passphrase).
- Add to /etc/mk.conf (or the equivalent in the bulk build tree):

# signed binary pkgs, see
# https://mail-index.netbsd.org/pkgsrc-users/2013/08/30/msg018511.html
SIGN_PACKAGES=x509
X509_KEY=/etc/pkg-certs/pkgkey_key.pem
X509_CERTIFICATE=/etc/pkg-certs/pkgkey_cert.pem

- Add to /etc/pkg_install.conf:

VERIFIED_INSTALLATIONS=always
CERTIFICATE_ANCHOR_PKGS=/etc/pkg-certs/pkgkey_cert.pem
CERTIFICATE_CHAIN=/etc/pkg-certs/00.pem
Then create a single pkg, like:
cd /usr/pkgsrc/pkgtools/digest
make package
make install
At the end of make package you should see successful signing of the binary pkg. But the make install will fail to verify the certificate.
Note: a key point of the whole setup is to avoid having to add the content of pkgkey_cert.pem to 00.pem (or similar). We want to avoid having to distribute many (changing) keys of build machines with the base system.
An alternative solution would make the key distribution part of the initial setup (e.g. download from a fixed relative path when selecting a pkg repository URL), but no off-the-shelf tools for that currently exist.
- Contact: Leonardo Taccari, tech-pkg
- Duration estimate: 175h
The current UI of pkgsrc MESSAGE has a couple of drawbacks:
- When installing a lot of packages via pkg_add or pkgin, messages often get lost in the output
- When updating packages via pkg_add or pkgin, the MESSAGE is printed anyway, even if it has not changed
For possible inspirations please look at OpenBSD ports' pkg-readmes and/or other package systems.
- Contact: nia, netbsd-docs
NetBSD has large amounts of documentation in XML DocBook: the Guide, the pkgsrc Guide, large parts of the website.
asciidoc is a much nicer, friendlier format than XML. It preserves the same advantages as DocBook: easy conversion to PDF and other formats for book distribution, and rich semantics.
- https://en.wikipedia.org/wiki/AsciiDoc
- https://asciidoctor.org/docs/asciidoc-writers-guide/
- https://asciidoc.org/
There is a tool for converting DocBook to asciidoc, but it is likely not perfect, and its output will require manual adjustment.
asciidoc itself can also output DocBook. We might leverage this so we can convert the website step-by-step, and keep the existing templates.
This project will require careful adjustment of build systems in the htdocs repository and those used by the pkgsrc guide.
A working proposal should demonstrate one page (of the website, pkgsrc guide, or NetBSD guide), converted to asciidoc and hooked up to the build system, integrated, with reasonable looking output.
- Contact: tech-kern
- Funded by: coypu@sdf.org ($400 expires 1/July/2021)
The current xhci driver requires contiguous allocations, and with higher uptime, NetBSD's kernel memory becomes more and more fragmented.
Eventually, large contiguous allocations aren't possible, resulting in random USB driver failures.
This feature will improve user experience of NetBSD.
When a system needs more memory but has free disk space it could auto create swap files and then delete them later.
The ideal solution would be configurable for:
- thresholds for creation
- min/max (don't fill up the disk entirely)
- encryption settings
The "listener" for the file creation should avoid thrashing, have timeouts, and handle disk space usage sanely.
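The creation/deletion policy could start as a simple pair of thresholds. A toy sketch of such a decision function follows; all names and numbers are invented, and a real implementation would query the VM system and manage swapctl(8)-backed files instead of printing:

```shell
# Toy decision function for the auto-swap policy described above.
decide() {  # usage: decide free-mem-MB free-disk-MB
    mem_low=100        # create swap when free memory drops below this
    disk_reserve=500   # never consume the last 500 MB of disk
    if [ "$1" -lt "$mem_low" ] && [ "$2" -gt "$disk_reserve" ]; then
        echo add        # low memory, disk to spare: create a swap file
    elif [ "$1" -gt $((mem_low * 4)) ]; then
        echo remove     # memory pressure is gone: delete the swap file
    else
        echo keep       # e.g. low memory but disk nearly full
    fi
}
decide 50 2000
decide 800 2000
decide 50 300
```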
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
OpenLDAP already has a SASL back-end for CYRUS-SASL.
In NetBSD, we have our own SASL-C library which has similar functionality and can be used in OpenLDAP instead of CYRUS.
Base postfix already does this.
There is a cyrus.c file where all the work is done.
We can make a saslc.c one that uses our library.
This will allow different authentication schemes to be used for the client programs (so we will be able to run ldapsearch against an Active Directory server using GSSAPI).
- Contact: tech-kern
- Mentors: Christoph Badura
- Duration estimate: 350h
NetBSD has an extensive test suite that tests native kernel and userland code.
Mounting the root file system is one of the last steps the kernel does during boot before starting the first process (init(8)).
Root file system selection is not covered by the current test suite.
How to find the root file system is specified in the kernel configuration file. E.g.:
config netbsd root on ? type ?
config netbsd root on sd0a type ?
The first is a wildcard specification which causes the kernel to look for the root file system on the device that the kernel was booted from. The second form specifies the device and partition that contains the root file system. Other forms are also possible.
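As a toy illustration of the distinction between the two forms (this is not the kernel's actual parser, just the shape of the decision it makes):

```shell
# Classify a config(5) root specification as wildcard or fixed.
root_spec() {
    set -f          # keep the '?' wildcard from globbing
    set -- $1
    while [ $# -gt 0 ] && [ "$1" != "on" ]; do shift; done
    shift
    if [ "$1" = "?" ]; then
        echo "wildcard: use the device the kernel was booted from"
    else
        echo "fixed: root on $1"
    fi
    set +f
}
root_spec 'config netbsd root on ? type ?'
root_spec 'config netbsd root on sd0a type ?'
```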
The selection process is a complex interaction between various global variables that get initialized from the kernel configuration file and by machine specific code that processes the information passed by the bootloader about where the kernel was loaded from.
This selection process is performed mostly by a function named setroot in the file sys/kern/kern_subr.c.
The project could be completed in a number of steps:
- Document the existing use cases and config ... syntax.
- Document the processing steps and functions called by setroot.
- Document how the various global variables interact.
- Write unit tests using rumpservers for the ATF framework for the documented use cases.
The project would initially be focussed on x86 (amd64 and i386).
- Contact: tech-toolchain
- Duration estimate: 175h
clang-format is a tool to format source code according to a set of rules and heuristics. Like most tools, it is not perfect nor covers every single case, but it is good enough to be helpful.
clang-format can be used for several purposes:
- Quickly reformat a block of code to the NetBSD (KNF) style.
- Spot style mistakes, typos and possible improvements in files.
- Help to follow the coding style rules.
Milestones:
- Create a configuration file .clang-format that approximates the NetBSD coding style
- Patch LibFormat to handle missing coding style rules.
- Integrate .clang-format with the NetBSD distribution.
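As a starting point for the first milestone, a hypothetical .clang-format approximating KNF might begin like this. This is an approximation only; several KNF rules have no LibFormat equivalent yet, which is exactly what the second milestone is about:

    BasedOnStyle: LLVM
    IndentWidth: 8
    ContinuationIndentWidth: 4
    UseTab: Always
    TabWidth: 8
    ColumnLimit: 80
    BreakBeforeBraces: Linux
    AlwaysBreakAfterReturnType: AllDefinitions
    IndentCaseLabels: false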
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
racoon(8) is the current IKEv1 implementation used in NetBSD. The racoon code is old and crufty and full of potential security issues. We would like to replace it. There are other implementations available, such as StrongSwan, openiked/isakmpd, racoon2.
This project has two stages:
Evaluate all 3 (or more) solutions, describe and document their pros and cons, and then settle into one of them.
Port it to NetBSD to replace racoon.
I have started working on that for racoon2 on https://github.com/zoulasc/racoon2/ (see the TODO file), and also have a build glue for NetBSD for it https://github.com/zoulasc/racoon2-glue/ and it works. I've also gotten openiked to compile (but not work).
- Contact: tech-net
- Mentors: Jason R. Thorpe
- Duration estimate: 175h
The urtwn and rtwn drivers have a lot of duplicate code.
Merging them will improve both.
This project is on hold due to the conversion project needing to be completed first.
- Contact: tech-kern
- Duration estimate: 350h
NetBSD has the capability to run binaries compiled for Linux under compat_linux. This is a thin in-kernel translation layer that implements the same ABI as the Linux kernel, translating Linux system calls to NetBSD ones.
Not all Linux syscalls are implemented. This means some programs cannot run.
This project is about identifying critical missing syscalls and adding support for them.
In the course of this project, you should find at least one Linux binary that does not yet run on NetBSD using compat_linux to use as a test case (your mentor may have suggestions), trace the program to find the missing features it requires, make note of them, and begin implementing them in NetBSD's Linux compatibility layer.
- Contact: port-amd64
- Duration estimate: 350h
VMWare provides an emulator that could use graphical acceleration, but doesn't on NetBSD.
A DRM driver exists for Linux and could be adapted, like other DRM drivers that have been ported.
- Contact: port-arm
- Duration estimate: 3-6 months
Android is an extremely popular platform, with good software support.
NetBSD has some COMPAT_LINUX support, and it might be possible to leverage this to run Android applications.
This is only done for GNU/Linux platforms right now (SUSE / Debian).
We need to start with Android x86, as COMPAT_LINUX for x86 already exists and is known to work.
As this is a difficult project, the project will need adjustments with time.
- Create an anbox chroot on linux/x86, experiment with running it with NetBSD.
- Experiment with running the simplest Android program
- Implement missing syscall emulation as needed
- ??? (gap for difficulties we will find from this)
- Package anbox-chroot in pkgsrc
Resources:
- Anbox makes it possible to run things on regular linux, and is worth exploring.
- This page details changes done on Android
- The source code of Android is open.
- Contact: tech-net
- Duration estimate: 175h
Access to some hardware registers and other things can only be done by one CPU at a time.
An easy way to do this is to make the entire network stack run with a single lock held, so operations only take place on one core.
This is inefficient if you ever want to use more than one core, e.g. for faster network cards.
Adapting old drivers to be able to run with the rest of the network stack not having this lock will improve NetBSD networking.
A large number of drivers must be adapted, and some of them can be emulated from virtual machines too, some examples:
- Realtek RTL8139 Gigabit Ethernet re(4) (supported by QEMU)
- AMD PCnet pcn(4) (supported by QEMU and VMware)
- Novell NE1000 ne(4) (supported by QEMU)
- Atheros/Killer Gigabit Ethernet alc(4)
- Attansic Gigabit Ethernet ale(4)
- Broadcom NetXtreme bnx(4)
You may find others listed in pci(4); it is possible that one of your own machines has a device whose driver hasn't been converted yet.
The file src/doc/TODO.smpnet in the NetBSD source tree contains a list of fully converted drivers that you may use as an example, as well as some general guidelines.
When applying for this project, please note which driver you would like to work on.
Raspberry Pi is a very popular ARM board.
It has a modern graphical driver, VC4.
NetBSD already supports several DRM drivers (from Linux 4.4), living in sys/external/bsd/drm2. Adapting this one will make Raspberry Pi work better out of the box.
While this project requires hardware, we can help with supplying a Raspberry Pi if needed.
Milestones for this project:
- VC4 driver builds as part of netbsd source tree (no hardware access needed)
- Adjust device tree configuration so VC4 driver is used
- Iron out bugs that appear from running it
- Contact: tech-net
- Mentors: Frédéric Fauberteau
- Duration estimate: 175h
The iscsictl(1) program manages the iSCSI instances on the local computer. It communicates with the iscsid(8) daemon to send queries using the iSCSI protocol.
Possible enhancements:
- Review the iscsictl(1) manpage. For instance, the command add_target has no description, and [target-opts] could refer to the "Target Options" section.
- Add a mode to the iscsictl(1) program to log in to sessions at boot. It could be a batch command (the name could be discussed) that reads an /etc/iscsi.conf file. Some parts of the iscsictl(1) from FreeBSD could be ported.
- Implement find_isns_servers.
The iscsi-target(8) server allows setting up iSCSI targets on a NetBSD host and presenting block storage to the network. It can be used to test the iscsictl(1) implementation.
Improvements that can be done to NPF with reference to WIP code/descriptions:
- Import thmap, needed for newer NPF
- WIP dynamic NAT address and NETMAP
- Use of "any": map $ext_if dynamic any -> $ext_v4 pass family inet4 from $int_net needs a few syntactic fixes/wrappers (e.g. loading can just handle "any" here, since it's actually correct, just merely not supported by the syntax; you can replace it with 0.0.0.0, though)
- traffic redirection, 1 2: I think it just needs IPv6 handling and testing
- Contact: christos, tech-userlevel
- Duration estimate: 1 month
The Citrus locale code in NetBSD's libc stores the locale data and code in shared object files. Dynamic linking is used to load the code by name using dlopen(3). The static libc build (libc.a) can't use dlopen(3) to open locales and it only supports the C locale, which is hard-coded.
This project is about adding make(1) rules to compile all the locales in libc and code to select each one by name. The same technique is used in libpam (libpam.a).
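A hedged sketch of the kind of make(1) glue this could involve, with invented variable and file names (the libpam build in the source tree is the reference to study):

    # hypothetical: link every locale module into libc.a instead of
    # building citrus_*.so modules for dlopen(3)
    .for _mod in ${CITRUS_MODULES}
    SRCS+=		citrus_${_mod}.c
    .endfor
    CPPFLAGS+=	-D_I18N_STATIC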
- Contact: tech-userlevel
- Mentors: Stephen Borrill
- Duration estimate: 350h
NetBSD has an extensive test suite that tests native kernel and userland code. NetBSD can run Linux binaries under emulation (notably on x86, but other platforms such as ARM have some support too). The Linux emulation is not covered by the test suite. It should be possible to run an appropriate subset of the tests when compiled as Linux binaries.
The project could be completed in a number of steps:
- Determine tests that make sense to run under Linux emulation (e.g. syscalls)
- Compile tests on Linux and then run on NetBSD
- Add new/altered tests for Linux-specific APIs or features
- Build cross-compilation environment to build Linux binaries on NetBSD, to make the test-suite self-hosting
- Fix Linux emulation for tests that fail
- Use tests to add Linux emulation for missing syscalls (e.g. timer_*)
It may also be instructive to look at the Linux Test Project.
The project would initially be focussed on x86 (amd64 and i386).
Debian's .deb packaging format is supported by mature and user-friendly packaging tools.
It's also the native packaging format in several systems, and would improve user experience on those systems.
It would be nice to generate pkgsrc packages to this format.
Prior work exists for generating packages in other formats.
Milestones
- Lookup .deb format documentation and experiment with tooling
- Investigate differences between pkgsrc's pkg and deb format
- Import necessary tools to generate .deb packages
- Build many packages and look for bugs
- Contact: Alistair G. Crooks, tech-crypto
- Mentors: Alistair G. Crooks
- Duration estimate: 350h
- Adapt existing ed25519 and salsa20 implementations to netpgp, netpgpverify
- Maintain compatibility and interoperability with gpg2's usage
- Maintain compatibility with openssh's keys
- Extend tests to cover new algorithms
Extended goals:
- provide a standalone netpgp signing utility, to mirror the netpgpverify verification utility
- Contact: tech-userlevel
- Mentors: Kamil Rytarowski
- Duration estimate: 350h
Integrate the LLVM Scudo with the basesystem framework. Build and execute base programs against Scudo.
Milestones:
- Ensure completeness of the toolchain in the basesystem.
- Add a new option for building the basesystem utilities with Scudo.
- Finish the integration and report bugs.
- Research Scudo for pkgsrc.
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
The NetBSD source code is verified by static analyzers like Coverity. Attempt to automate the process, and report and, if possible, fix the bugs found.
Milestones:
- Consult and research the available tools.
- Integrate the tools for the purposes of the NetBSD project.
- Scan the sources, report bugs, if possible fix the problems, or file bug reports
- Contact: tech-userlevel
- Mentors: Kamil Rytarowski
- Duration estimate: 350h
There is initial, functional support for syzkaller on NetBSD (as guest). Resume the porting effort, execute the fuzzer and report kernel bugs.
Milestones:
- Ensure completeness of the current support.
- Execute the fuzzer and gather reports, narrow down problems, translate to C reproducers.
- Add missing features, fix bugs in the NetBSD support.
- Contact: tech-userlevel
- Mentors: Kamil Rytarowski
- Duration estimate: 350h
Integrate the LLVM libFuzzer with the basesystem framework. Build and execute base programs against libFuzzer.
Milestones:
- Ensure completeness of the toolchain in the basesystem.
- Add a new option for building the basesystem utilities with libFuzzer.
- Finish the integration and report bugs.
- Contact: tech-pkg
- Mentors: Kamil Rytarowski
- Duration estimate: 350h
Add support in the pkgsrc framework for building packages with sanitizers.
Expected sanitizer options:
- Address (ASan),
- Memory (MSan),
- MemoryWithOrigin (MSan with tracking the origin)
- Undefined (UBSan),
- Thread (TSan),
- Address;Undefined (ASan & UBSan)
- "" (empty string) - the default option
Milestones:
- Ensure the availability of the toolchain and prebuilt userland with the sanitizers.
- Add a new option in pkgsrc to build the packages with each sanitizer.
- Build the packages and report problems and bugs.
- Contact: tech-kern
- Mentors: Christos Zoulas
- Duration estimate: 350h
ALTQ (ALTernate Queueing) is an optional network packet scheduler for BSD systems. It provides various queueing disciplines and other quality of service (QoS) related components required to control resource usage.
It is currently integrated in pf(4).
Unfortunately it was written a long time ago and it suffers from a lot of code duplication, dangerous code practices and can use improvements both in the API and implementation. After these problems have been addressed it should be integrated with npf(4).
- Contact: tech-userlevel
Subtle changes in NetBSD may carry a significant performance penalty. Having a realistic performance test for various areas will allow us to find offending commits. Ideally, it would be possible to run the same tests on other operating systems so we can identify points for improvement.
It would be good to test for specific cases as well, such as:
- Network operation with a packet filter in use
- Typical disk workload
- Performance of a multi-threaded program
Insert good performance testsuite examples here
It would be nice to easily test commits between particular dates. Automated runs could be implemented with the help of misc/py-anita.
- Contact: tech-kern
LFS is currently carrying around both support for the historical quota format ("quota1") and the recent newer 64-bit in-volume quota format ("quota2") from FFS, but neither is actually connected up or works. In fact, neither has ever worked -- there is no reason to worry about compatibility.
Define a new on-disk quota format that shares some of the properties of both the FFS formats:
it should be 64-bit (like quota2);
in LFS there is no reason to have a tree-within-a-file like quota2; just files indexed directly by uid/gid like quota1 is fine;
the quota files should be hidden away like in quota2;
there should be explicit support for deleting quota entries like quota2;
some of the additional bits in quota2, like default value handling, should be retained.
Implement the quota system by folding together the existing copies of the quota1 and quota2 code and writing as little new code as possible. Note that while the "ulfs" code should have more or less complete quota support and all the necessary hooks in place, you'll have to add quota hooks to the native lfs code.
You will also need to figure out (before you start coding) what if any special steps need to be taken to make sure that the quota files on disk end up consistent as segments are written out. This may require additional logic like the existing DIROP logic, which would likely be a pain.
Add an option to newfs_lfs to enable quotas. Then add code to fsck_lfs to crosscheck and correct the usage amounts. Also add logic in tunefs_lfs to turn quotas on and off. (It's ok to require a fsck afterwards when turning quotas on to compute the current usage amounts.)
(Note that as of this writing no tunefs_lfs program exists -- that will have to be corrected too.)
- Contact: tech-kern
- Duration estimate: 1-2 months
The fallocate/posix_fallocate system call, and the file-system-level VOP_FALLOCATE operation backing it, allow preallocating underlying storage for files before writing into them.
This is useful for improving the physical layout of files that are written in arbitrary order, such as files downloaded by bittorrent; it also allows failing up front if disk space runs out while saving a file.
This functionality is not currently implemented for FFS; implement it. (For regular files.) There is not much to this besides running the existing block allocation logic in a loop. (Note though that as the loop might take a while for large allocations, it's important to check for signals periodically to allow the operation to be interrupted.)
- Contact: tech-kern
- Duration estimate: 2 months
Apply TCP-like flow control to processes accessing the filesystem, particularly writers. That is: put them to sleep when there is "congestion", to avoid generating enormous backlogs and provide more fair allocation of disk bandwidth.
This is a nontrivial undertaking as the I/O path wasn't intended to support this type of throttling. Also, the throttle should be underneath all caching (there is nothing to be gained by throttling cached accesses) but by that point attributing I/O actions to specific processes is awkward.
It might be worthwhile to develop a scheme to tag buffers with the upper-level I/O requests their pending changes came from, or something of the sort. That would likely help with readahead processing and other things as well as with flow control.
- Contact: tech-kern
Heavily used file systems tend to spread the blocks all over the disk, especially if free space is scarce. The traditional fix for this is to use dump, newfs, and restore to rebuild a volume; however, this is not acceptable for current disk sizes.
The resize_ffs code already contains logic for relocating blocks, as it needs to be able to do that to shrink a volume; it is likely that it would serve nicely as a starting point for a defragmenting tool.
Note that safety (not scrambling the volume if the system crashes while defragging) is somewhat tricky and critically important.
- Contact: tech-kern
- Duration estimate: 1-2 months
NetBSD has a fixed, kernel-wide upper limit on transfer size, called MAXPHYS, which is currently set to 64k on most ports. This is too small to perform well on modern IDE and SCSI hardware; on the other hand some devices can't do more than 64k, and in some cases are even limited to less (the Xen virtual block device for example). Software RAID will also cause requests to be split in multiple smaller requests.
This limit should be per-IO-path rather than global and should be discovered/probed as those paths are created, based on driver capability and driver properties.
Much of the work has already been done and has been committed on the tls-maxphys branch. What's needed at this point is mostly testing and probably some debugging until it's ready to merge.
This project originally suggested instead to make the buffer queue management logic (which currently only sorts the queue, aka disksort) capable of splitting too-large buffers or aggregating small contiguous buffers in order to conform to device-level requirements.
Once the MAXPHYS changes are finalized and committed, this project may be simply outdated. However, it may also be worthwhile to pursue this idea as well, particularly the aggregation part.
- Contact: tech-userlevel
Right now /usr/games/gomoku is not very smart. This has two consequences: first, it's not that hard to beat it if one tries; and second, if you put it in a spot, it takes annoyingly long to play and chews up tons of RAM. On older hardware or a RAM-limited virtual machine, it'll run out of memory and bail.
A few years ago when I (dholland) looked into this, there was a small but apparently nonempty community of AI researchers working on gomoku brains. There was also some kind of interface standard for gomoku brains such that they could be conveniently run against one another.
This project is to:
- track down the latest version of that standard and add external brain support to our gomoku(6);
- package a good gomoku brain in pkgsrc if there's a suitable open-source one;
- based on available research, improve the brain built into our gomoku (doesn't have to be competitive, just not terrible);
- also (if applicable) improve the framework the built-in brain code uses so it can back out if it runs out of memory.
Some of these issues were originally raised in http://gnats.netbsd.org/3126.
Note to passersby: "gomoku" is not the same game as "go".
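For reference, a brain speaking a Gomocup-style protocol is essentially a read-respond loop over text commands; the exact command set should be checked against the current protocol specification. This toy always answers fixed coordinates, but shows the shape of the external-brain interface:

```shell
# Toy "brain": reads manager commands on stdin, answers moves on stdout.
# A real brain would compute a move instead of replying 11,11.
while read -r cmd arg; do
    case "$cmd" in
        START) echo "OK" ;;        # board size in $arg, accepted blindly
        BEGIN) echo "10,10" ;;     # we move first
        TURN)  echo "11,11" ;;     # opponent moved to $arg; reply
        END)   exit 0 ;;
    esac
done
```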
- Contact: tech-toolchain
- Duration estimate: 3-4 months
Currently BSD make emits tons and tons of stat(2) calls in the course of figuring out what to do, most notably (but not only) when matching suffix rules. This causes measurable amounts of overhead, especially when there are a lot of files like in e.g. libc. First step is to quantify this overhead so you can tell what you're accomplishing.
Fixing this convincingly requires a multi-step approach: first, give make an abstraction layer for directories it's working with. This alone is a nontrivial undertaking because make is such a mess inside. This step should also involve sorting out various issues where files in other directories (other than the current directory) sometimes don't get checked for changes properly.
Then, implement a cache under that abstraction layer so make can remember what it's learned instead of making the same stat calls over and over again. Also, teach make to scan a directory with readdir() instead of populating the cache by making thousands of scattershot stat() calls, and implement a simple heuristic to decide when to use the readdir method.
Unfortunately, in general after running a program the cache has to be flushed because we don't know what the program did. The final step in this project is to make use of kqueue or inotify or similar when running programs to synchronize make's directory cache so it doesn't have to be flushed.
As an additional step one might also have make warn when a recipe touches files it hasn't been declared to touch... but note that while this is desirable it is also somewhat problematic.
- Contact: tech-kern
- Mentors: Taylor R Campbell
- Duration estimate: 350h
When developing device drivers inside the kernel, mistakes will usually cause the whole kernel to crash unrecoverably and require a reboot. But device drivers need not run inside the kernel: with rump, device driver code can be run as a process inside userland.
However, userland code has only limited access to the hardware registers needed to control devices: currently, NetBSD supports only USB device drivers in userland, via ugen(4). NetBSD should additionally support developing PCI drivers in userland with rump -- at least one driver, iwm(4), was developed using rump, but on a Linux host!
There are several milestones to this project:
- Implement enough of the bus_space(9) and pci_mapreg() (pci(9)) APIs to map device registers from PCI BARs, using a pci(4) device (/dev/pciN). A first approximation can be done using pci(3) and simply mmapping from mem(4) (/dev/mem), but it would be better to cooperate with the kernel so that the kernel can limit the user to mapping ranges listed in PCI BARs without granting privileges to read all physical pages in the system. Cooperation with the kernel will also be necessary to implement port I/O instead of memory-mapped I/O, without raising the I/O privilege level of the userland process, on x86 systems.
- Expose PCI interrupts as events that can be read from a pci(4) (/dev/pciN) device instance, and use that to implement the pci_intr(9) API in userland. For many devices, this may require a small device-specific shim in the kernel to acknowledge interrupts while they are masked -- but that is a small price to pay for rapidly iterating driver development in userland.
- Devise a scheme for userland to allocate and map memory for DMA in order to implement bus_dma(9). Again, a first approximation can be done by simply wiring pages with mlock(3) and then asking the kernel for a virtual-to-physical translation to program hardware DMA registers. However, this will not work on any machines with an IOMMU, which would help to prevent certain classes of catastrophic memory corruption in the case of a buggy driver. Cooperation with the kernel, and maintaining a kernel-side mirror of each bus_dmamem allocation and each bus_dmamap, will be necessary.
Inspiration may be found in the Linux uio framework. This project is not necessarily PCI-specific -- ideally, most of the code to manage bus_space(9), bus_dma(9), and interrupt event delivery should be generic. The focus is PCI because it is widely used and would be applicable to many drivers and devices for which someone has yet to write drivers.
- Contact: tech-kern
- Mentors: Taylor R Campbell
- Duration estimate: 350h
NetBSD configures a timer device to deliver a periodic timer interrupt, usually every 10 ms, in order to count time, wake threads that are sleeping, etc. This made sense when timer devices were expensive to program and CPUs ran at MHz. But today, CPUs run at GHz; timers on modern x86, arm, mips, etc. hardware are cheap to reprogram; programs expect greater than 10 ms resolution for sleep times; and mandatory periodic activity on idle machines wastes power.
There are four main milestones to this project:
- Choose a data structure for high-resolution timers, and a way to request high-resolution vs low-resolution sleeps, and adapt the various timeout functions (cv_timedwait, etc.) to use it. The current call wheel data structure for callouts provides good performance, but only for low-resolution sleeps. We need another data structure that provides good performance for high-resolution sleeps without hurting the performance of the existing call wheel for existing applications.
- Design a machine-independent high-resolution timer device API, implement it on a couple machines, and develop tests to confirm that it works. This might be done by adapting the struct timecounter interface to arm it for an interrupt, or might be done another way.
- Convert all the functions of the periodic 10 ms timer, hardclock, to schedule activity only when needed.
- Convert the various software subsystems that rely on periodic timer interrupts every tick, or every second, via callout(9), either to avoid periodic work altogether, or to batch it up only when the machine is about to go idle, in order to reduce the number of wakeups and thereby reduce power consumption.
- Contact: tech-kern
- Duration estimate: 3 months
Programmatic interfaces in the system (at various levels) are currently specified in C: functions are declared with C prototypes, constants with C preprocessor macros, etc. While this is a reasonable choice for a project written in C, and good enough for most general programming, it falls down on some points, especially for published/supported interfaces. The system call interface is the most prominent such interface, and the one where these issues are most visible; however, other more internal interfaces are also candidates.
Declaring these interfaces in some other form, presumably some kind of interface description language intended for the purpose, and then generating the corresponding C declarations from that form, solves some of these problems. The goal of this project is to investigate the costs and benefits of this scheme, and various possible ways to pursue it sanely, and produce a proof-of-concept implementation for a selected interface that solves a real problem.
Note that as of this writing many candidate internal interfaces do not really exist in practice or are not adequately structured enough to support this treatment.
Problems that have been observed include:
Using system calls from languages that aren't C. While many compilers and interpreters have runtimes written in C, or C-oriented foreign function interfaces, many do not, and rely on munging system headers in (typically) ad hoc ways to extract necessary declarations. Others cut and paste declarations from system headers and then break silently if anything ever changes.
Generating test or validation code. For example, there is some rather bodgy code attached to the vnode interface that allows crosschecking the locking behavior observed at runtime against a specification. This code is currently generated from the existing vnode interface specification in vnode_if.src; however, the specification, the generator, and the output code all leave a good deal to be desired. Other interfaces where this kind of validation might be helpful are not handled at all.
Generating marshalling code. The code for (un)marshalling system call arguments within the kernel is all handwritten; this is laborious and error-prone, especially for the various compat modules. A code generator for this would save a lot of work. Note that there is already an interface specification of sorts in syscalls.master; this would need to be extended or converted to a better description language. Similarly, the code used by puffs and refuse to ship file system operations off to userland is largely or totally handwritten and in a better world would be automatically generated.
Generating trace or debug code. The code for tracing system calls and decoding the trace results (ktrace and kdump) is partly hand-written and partly generated with some fairly ugly shell scripts. This could be systematized; and also, given a complete specification to work from, a number of things that kdump doesn't capture very well could be improved. (For example, kdump doesn't decode the binary values of flags arguments to most system calls.)
Add your own here. (If proposing this project for GSOC, your proposal should be specific about what problem you're trying to solve and how the use of an interface definition language will help solve it. Please consult ahead of time to make sure the problem you're trying to solve is considered interesting/worthwhile.)
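A toy version of the marshalling-generator idea helps make it concrete. Everything below is invented for illustration: the spec format is not syscalls.master (which is richer and needs a real parser), and a real generator would also emit the marshalling functions, not just the argument structs. `syscallarg()` is assumed to be the wrapper macro used in the generated declarations.

```python
# Toy illustration: generate C argument-struct declarations from a
# tiny, invented interface spec. A syscalls.master-style generator
# would produce this flavor of output, plus the marshalling code.

SPEC = [
    # (syscall name, [(C type, argument name), ...])
    ("read",  [("int", "fd"), ("void *", "buf"), ("size_t", "nbyte")]),
    ("close", [("int", "fd")]),
]

def gen_args_struct(name, args):
    """Emit the C struct carrying the marshalled arguments for one call."""
    lines = ["struct sys_%s_args {" % name]
    for ctype, argname in args:
        lines.append("\tsyscallarg(%s) %s;" % (ctype, argname))
    lines.append("};")
    return "\n".join(lines)

def gen_all(spec):
    return "\n\n".join(gen_args_struct(name, args) for name, args in spec)

print(gen_all(SPEC))
```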
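As a small illustration of the flags-decoding point, here is the kind of table-driven decoder a generator could emit from a complete specification. The flag table below is abbreviated and hand-written, which is exactly what a generator would avoid; the numeric values match the traditional BSD open(2) flags.

```python
# Toy flag decoder of the sort kdump could use if it had a complete
# interface specification to generate the tables from.

OPEN_FLAGS = [
    (0x0004, "O_NONBLOCK"),
    (0x0008, "O_APPEND"),
    (0x0200, "O_CREAT"),
    (0x0400, "O_TRUNC"),
]
OPEN_ACCMODE = {0: "O_RDONLY", 1: "O_WRONLY", 2: "O_RDWR"}

def decode_open_flags(value):
    """Render a numeric flags argument symbolically, e.g.
    0x209 -> 'O_WRONLY|O_APPEND|O_CREAT'."""
    names = [OPEN_ACCMODE[value & 3]]
    rest = value & ~3
    for bit, name in OPEN_FLAGS:
        if rest & bit:
            names.append(name)
            rest &= ~bit
    if rest:
        names.append(hex(rest))   # unknown residue, shown raw
    return "|".join(names)
```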
The project requires: (1) choose a target interface and problem; (2) evaluate existing IDLs (there are many) for suitability; (3) choose one (rolling your own should be only a last resort); (4) prepare a definition for the target interface in your selected IDL; (5) implement the code generator to produce the corresponding C declarations; (6) integrate the code generator and its output into the system (this should require minimal changes); (7) implement the code generator and other material needed to solve your target problem.
Note that while points 5, 6, and 7 are a reasonably simple matter of programming, points 1-3 (and possibly 4) constitute research -- these parts are at least as important as the code, maybe more so, and doing them well is critical to success.
The IDL chosen should if at all possible be suitable for repeating steps 4-7 for additional choices of target interface and problem, as there are a lot of such choices and many of them are probably worth pursuing in the long run.
Some things to be aware of:
- syscalls.master contains a partial specification of the functions in the system call interface. (But not the types or constants.)
- vnode_if.src contains a partial specification of the vnode interface.
- There is no specification, partial or otherwise, of the other parts of the VFS interface, other than the code and the (frequently outdated) section 9 man pages.
- There are very few other interfaces within the kernel that are (as of this writing) structured enough or stable enough to permit using an IDL without regretting it. The bus interface might be one.
- Another possible candidate is to pursue a specification language for handling all the conditional visibility and namespace control material in the userspace header files. This requires a good deal of familiarity with the kinds of requirements imposed by standards and the way these requirements are generally understood, and is both a large and an extremely detail-oriented undertaking.
Note: the purpose of an IDL is not to encode a parsing of what are otherwise C-language declarations. The purpose of an IDL is to encode some information about the semantics of an interface. Parsing the input is the easy part of the problem. For this reason, proposals that merely suggest replacing C declarations with e.g. equivalent XML or JSON blobs are unlikely to be accepted.
(Also note that XML-based IDLs are unlikely to be accepted by the community as XML is not in general considered suitable as a source language.)
- Contact: tech-kern
- Duration estimate: 2-3 months
The fdiscard system call, and the file-system-level VOP_DISCARD operation backing it, allow dropping ranges of blocks from files to create sparse files with holes. This is a simple generalization of truncate.
Discard is not currently implemented for FFS; implement it, for regular files. It should be possible to do so by generalizing the existing truncate code; this should not be especially difficult, but it is somewhat messy.
Note that it's important (for both the WAPBL and non-WAPBL cases) to preserve the on-disk invariants needed for successful crash recovery.
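Much of the messiness is range arithmetic: a discard request presumably has to be split into partial blocks at the ends (which stay allocated and are zeroed in place) and whole blocks in the middle (which can be freed to leave a hole). A sketch of that split, with invented names and no connection to the actual FFS code:

```python
# Sketch of the range arithmetic an FFS discard would need: split a
# byte range into partial head/tail regions (zeroed in place, block
# stays allocated) and whole blocks (freed, leaving a hole).
# Names and return shape are ours, for illustration only.

def split_discard_range(off, length, bsize):
    """Return (head, whole_blocks, tail): head/tail are (offset, len)
    byte regions to zero (or None), whole_blocks is a range of block
    numbers that can be freed."""
    end = off + length
    first_whole = (off + bsize - 1) // bsize      # round up
    last_whole = end // bsize                     # round down
    if first_whole >= last_whole:
        # range lies within a single block: zero it all, free nothing
        return (off, length), range(0, 0), None
    head = (off, first_whole * bsize - off) if off % bsize else None
    tail = (last_whole * bsize, end - last_whole * bsize) if end % bsize else None
    return head, range(first_whole, last_whole), tail
```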
- Contact: tech-kern
- Duration estimate: 1-2 months
Implement the block-discard operation (also known as "trim") for RAIDframe.
Higher-level code may call discard to indicate to the storage system that certain blocks no longer contain useful data. The contents of these blocks do not need to be preserved until they are later written again. This means, for example, that they need not be restored during a rebuild operation.
RAIDframe should also be made to call discard on the disk devices underlying it, so those devices can take similar advantage of the information. This is particularly important for SSDs, where discard ("trim") calls can increase both the performance and write lifetime of the device.
The complicating factor is that a block that has been discarded no longer has stable contents: it might afterwards read back as zeros, or it might not, or it might change to reading back zeros (or trash) at any arbitrary future point until written again.
The first step of this project is to figure out a suitable model for the operation of discard in RAIDframe. Does it make sense to discard single blocks, or matching blocks across stripes, or only whole stripe groups, or what? What metadata should be stored to keep track of what's going on, and where does it go? Etc.
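A toy XOR-parity example shows the hazard with per-block discard: if a discarded data block later reads back as zeros while the parity was computed over its old contents, reconstructing some other block in the stripe silently returns garbage. This is only a model of the problem, not RAIDframe code:

```python
# Toy XOR-parity stripe showing why unstable discarded blocks are a
# problem: reconstruction through stale parity is silently wrong
# unless parity is updated or the whole stripe is discarded together.

def parity(blocks):
    p = bytes(len(blocks[0]))
    for b in blocks:
        p = bytes(x ^ y for x, y in zip(p, b))
    return p

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Reconstructing block 0 from the others plus parity works:
assert parity([data[1], data[2], p]) == b"AAAA"

# Block 1 is discarded and now reads back as zeros; parity is stale,
# so reconstructing block 0 yields garbage:
data[1] = b"\x00" * 4
assert parity([data[1], data[2], p]) != b"AAAA"
```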
- Contact: tech-kern
- Mentors: Greg Oster
- Duration estimate: 175h
Implement component 'scrubbing' in RAIDframe.
RAIDframe (raid(4)) provides various RAID levels to NetBSD, but currently has no facilities for 'scrubbing' (checking) the components for read errors.
Milestones:
- implement a generic scrubbing routine that can be used by all RAID types
- implement RAID-level-specific code for component scrubbing
- add an option to raidctl (raidctl(8)) to allow the system to run the scrubbing as required
- update raidctl (raidctl(8)) documentation to reflect the new scrubbing capabilities, and discuss what scrubbing can and cannot do (benefits and limitations)
Bonus:
- Allow the user to select whether or not to attempt to 'repair' errors
- Actually attempt to repair errors
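A sketch of what the generic routine might look like: walk every block of every component, count read errors, and optionally rewrite bad blocks with reconstructed data. The component and reconstruction interfaces here are invented; in RAIDframe the per-RAID-level code would supply the reconstruction step.

```python
# Sketch of a generic scrub pass over a hypothetical RAID set.
# 'components' expose nblocks/read/write; 'reconstruct' is the
# RAID-level-specific routine that rebuilds a block from redundancy.

def scrub(components, reconstruct, repair=False):
    """Read every block of every component; return the number of read
    errors found, rewriting bad blocks if 'repair' is set."""
    errors = 0
    for c in components:
        for blkno in range(c.nblocks):
            try:
                c.read(blkno)
            except IOError:
                errors += 1
                if repair:
                    c.write(blkno, reconstruct(components, c, blkno))
    return errors
```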
Abstract:
Xen now supports the ARM CPU architecture, but NetBSD does not yet support Xen on ARM. A dom0 port would be the first step in this direction.
Deliverables:
- toolstack port to arm
- Fully functioning dom0
This project would be much easier once pvops/pvh is ready, since the Xen ARM implementation only supports PVH.
See http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions
- Contact: port-xen
Abstract: The blktap2 mechanism uses a userland daemon to provide access to on-disk files, arbitrated by the kernel. The NetBSD kernel lacks this support and currently disables blktap in xentools.
Deliverables:
- blktap2 driver for /dev/blktapX
- Tool support to be enabled in xentools
- Enable access to various file formats (QCOW, VHD, etc.) via the tap: disk specification.
Implementation:
The driver interacts with some areas of the kernel that differ from Linux, e.g. inter-domain page mapping and inter-domain signalling. This may make it more challenging than a simple driver, especially with respect to test/debug/performance risk.
Note: There's opportunity to compare notes with the FUSE/rump implementations.
See http://wiki.xenproject.org/wiki/Blktap and http://xenbits.xensource.com/hg/xen-unstable.hg/file/tip/tools/blktap2/README
- Contact: port-xen
- Mentors: Cherry G. Mathew
- Duration estimate: 37 hours
Abstract:
The NetBSD boot path contains conditionally compiled code for XEN. In the interests of moving towards unified pvops kernels, this code can be modified to be conditionally executed at runtime.
Deliverables:
- Removal of #ifdef conditional code from all source files.
- New minor internal API definitions, where required, to encapsulate platform specific conditional execution.
- Performance comparison figures for before and after.
- Clear up the API clutter to make way for pvops.
Note: Header files are exempt from the #ifdef removal requirement.
Implementation:
The number of changes required is estimated as follows:
drone.sorcerer> pwd
/home/sorcerer/src/sys
drone.sorcerer> find . -type f -name '*.[cSs]' -exec grep -H "^#if.* *\\bXEN\\b" '{}' \; | wc -l
222
drone.sorcerer>
The effort required in this task is likely to be highly variable, depending on the case. In some cases, minor API design may be required.
Most of the changes are likely to be mechanical.
- Contact: port-xen
- Duration estimate: 64 hours
Abstract: Dom0 is not SMP safe right now. The main culprits are the backend drivers for blk and netif (and possibly others, such as pciback).
Deliverables:
- SMP capable dom0
Implementation:
This involves extensive stress testing of the dom0 kernel under concurrent, SMP workloads. Locking in various backend and other drivers needs to be reviewed, reworked, and tested.
Interrupt paths need to be reviewed, and the interrupt handling code may need to be reworked. The current event code doesn't multiplex well on vcpus; this needs reinvestigation.
This is a test/debug heavy task, since MP issues can crop up in various unrelated parts of the kernel.
- Contact: port-xen
- Duration estimate: 48 hours
Abstract: This is the first step towards PVH mode, and is relevant only for domU. It speeds up amd64: PV on amd64 has suboptimal TLB and syscall overhead due to privilege-level sharing between kernel and userland.
Deliverables:
- operational PV drivers (netif.h, blkif.h) in HVM mode.
Implementation:
- Xen needs to be detected at boot time.
- The shared page and hypercall interface need to be set up.
- xenbus(4) attachment during boot-time configuration.
Scope (Timelines):
This project involves intrusive kernel configuration changes, since NetBSD currently has separate native (GENERIC) and xen (XEN3_DOMU) kernels. A "hybrid" build is required to bring in the PV drivers and xen infrastructure support such as xenbus(4).
Once the hypercall page is up however, progress should be faster.
See:
http://wiki.xenproject.org/wiki/XenParavirtOps and http://wiki.xenproject.org/wiki/PV_on_HVM
- Contact: port-xen
- Duration estimate: 16 hours
Abstract:
Although Xen has ACPI support, the kernel needs to make explicit hypercalls for power management related functions.
Xen supports sleep/resume, and this support needs to be exported via the dom0 kernel.
Deliverables:
- sleep/resume support for NetBSD/Xen dom0
Implementation:
There are two approaches to this. The first one involves using the kernel to export syscall nodes for ACPI.
The second one involves Xen-aware power scripts, which can write directly to the hypercall kernfs node, /kern/privcmd. Device tree suspend could happen via drvctl(8), in theory.
- Contact: port-xen
- Duration estimate: 16 hours
Abstract: The pvfb video driver and the keyboard/pointer driver are useful for GUI interaction with a domU console.
Deliverables:
A framebuffer device and console which can be used to interact with the domU. The qemu process on the dom0 can then use this buffer to display to the regular X server using SDL, as it does with HVM domUs. This is very useful for GUI frontends to Xen.
Implementation:
Implement a simple framebuffer driver, along the lines of: http://xenbits.xensource.com/hg/linux-2.6.18-xen.hg/file/ca05cf1a9bdc/drivers/xen/fbfront/
- Contact: port-xen
- Duration estimate: 32 hours
Abstract:
Make the dom0 kernel use and provide drm(4) support. This will enable 3D acceleration within dom0, making it more reasonable to run a dom0 instead of GENERIC on a workstation that also runs domUs.
Deliverables:
Functioning X server using native kernel-style drmkms support. This must function at least in a PVH style dom0 kernel.
Implementation:
- mtrr support for Xen
- high memory RAM extent allocation needs special attention (See: kern/49330)
- native driver integration and testing.
- Contact: port-xen
- Duration estimate: 64 hours
Abstract: This project enables NetBSD to have physical RAM pages dynamically added via uvm_page_physload() and dynamically removed using a complementary function.
Rationale: Makes NetBSD a viable datacentre OS, with hotspare RAM add/remove support.
Deliverables:
- dynamic physical page add/remove support
- Xen balloon driver usage of this support.
Implementation:
Further API and infrastructural changes may be required to accommodate these changes. The physical page management code will need to be enhanced to use something more flexible than a fixed array, while taking into account memory-attribute-related tagging information.
- Contact: port-xen
Abstract: This project involves allowing SCSI devices on the dom0 to be passed through to the domU. Individual SCSI devices can be passed through dynamically. The domU needs to have a "frontend" driver that can communicate with the backend driver which runs on the dom0 and takes care of arbitrating and sometimes duplicating the requests.
Deliverables:
- Functioning backend and frontend drivers for dom0 and domU respectively.
- Make available high performance SCSI device to PV and PVHVM domains.
- Contact: port-xen
Abstract: This project involves allowing USB devices on the dom0 to be passed through to the domU. Individual USB devices can be passed through dynamically. The domU needs to have a "frontend" driver that can communicate with the backend driver which runs on the dom0 and takes care of arbitrating and sometimes duplicating the requests.
Deliverables:
- Functioning backend and frontend drivers for dom0 and domU respectively.
- Make available USB device passthrough to PV and PVHVM domains.
Implementation:
Since the feature is under development in Xen, a frontend implementation is advisable to start to engage with the project. Once we have a functioning frontend on a non-NetBSD dom0, then we can engage with the community to develop the backend.
- Contact: port-xen
Abstract: Current libvirt support in NetBSD doesn't include Xen support.
Rationale: Enables GUI-based domU managers, and makes the pvfb work easily accessible.
Deliverables:
- Fully functioning libvirt with Xen "driver" support.
See http://wiki.xenproject.org/wiki/Libxl_event_API_improvements and http://libvirt.org/drvxen.html
- Contact: port-xen
- Mentors: Cherry G. Mathew
- Duration estimate: 16 hours
Abstract: NetBSD/Xen currently doesn't support __HAVE_DIRECT_MAP
Deliverables:
- Direct Mapped functioning NetBSD/Xen, both dom0 and domU
- Demonstrable performance numbers
- Reduce TLB contention and clean up code for pvops/pvh
Implementation: This change involves testing for the Xen and CPU featureset, and tweaking the pmap code so that 1G superpages can be requested from the hypervisor and direct mapped the same way native code does.
- Contact: tech-userlevel
- Mentors: David Holland
- Duration estimate: 350h
Add a query optimizer to find(1).
Currently find builds a query plan for its search, and then executes it with little or no optimization. Add an optimizer pass on the plan that makes it run faster.
Things to concentrate on are transforms that allow skipping I/O: not calling stat(2) on files that will not be matched, for example, or not recursing into subdirectories whose contents cannot ever match. Combining successive string matches into a single match pattern might also be a win; so might precompiling match patterns into an executable match form (like with regcomp(3)).
To benefit from many of the possible optimizations it may be necessary to extend the fts(3) interface and/or extend the query plan schema or the plan execution logic. For example, currently it doesn't appear to be possible for an fts(3) client to take advantage of file type information returned by readdir(3) to avoid an otherwise unnecessary call to stat(2).
Step 1 of the project is to choose a number of candidate optimizations, and for each identify the internal changes needed and the expected benefits to be gained.
Step 2 is to implement a selected subset of these based on available time and cost/benefit analysis.
It is preferable to concentrate on opportunities that can be found in find invocations likely to actually be typed by users or issued by programs or infrastructure (e.g. in pkgsrc), vs. theoretical opportunities unlikely to appear in practice.
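For a flavor of the kind of transform involved, here is a sketch of one optimization: among freely reorderable predicates (those without side effects), run tests that need only the directory entry before tests that force a stat(2). The plan representation and cost classification are invented for the example:

```python
# Sketch of one find(1) optimization: stable-sort an AND-chain of
# side-effect-free predicates so that cheap name tests (evaluated
# from the directory entry alone) run before tests requiring stat(2).

CHEAP = {"name"}                         # no stat(2) needed
EXPENSIVE = {"size", "newer", "type"}    # force a stat(2)

def optimize(plan):
    """Reorder an AND-chain of predicates, cheap ones first.
    The sort is stable, so relative order within each class is kept."""
    return sorted(plan, key=lambda p: 0 if p[0] in CHEAP else 1)

plan = [("size", "+1M"), ("name", "*.c"), ("type", "f")]
assert optimize(plan) == [("name", "*.c"), ("size", "+1M"), ("type", "f")]
```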
- Contact: tech-kern
- Mentors: Christos Zoulas
- Duration estimate: 175h
Test and debug the RAID 6 implementation in RAIDframe.
NetBSD uses RAIDframe (raid(4)), and stub code exists for RAID 6, but it is neither well documented nor well tested. This code needs to be researched and vetted by an interested party. Other BSD projects should be consulted freely.
Milestones:
- set up a working RAID 6 using RAIDframe
- document RAID 6 in the raid(4) manual page
- port/develop a set of reliability and performance tests
- fix bugs along the way
- automate RAID testing in atf
Bonus:
- Document how to add new RAID levels to RAIDframe
- (you're awesome bonus) add RAID 1+0, etc.
- Contact: tech-kern
- Duration estimate: 1-2 months
The mlock() system call locks pages in memory; however, it's meant for real-time applications that wish to avoid pageins.
There's a second class of applications that want to lock pages in memory: databases and anything else doing journaling, logging, or ordered writes of any sort want to lock pages in memory to avoid pageouts. That is, it should be possible to lock a (dirty) page in memory so that it does not get written out until after it's unlocked.
It is a bad idea to try to make mlock() serve this purpose as well as the purpose it already serves, so add a new call, perhaps mlockin(), and implement support in UVM.
Then, for extra credit, ram it through POSIX and make Linux implement it.
- Contact: tech-kern, tech-net, David Holland
Add publish-subscribe sockets to AF_UNIX (filesystem sockets).
The idea of a publish-subscribe socket (as opposed to a traditional socket) is that anything written to one end is read out by all processes reading from the other end.
This raises some issues that need attention; most notably, if a process doesn't read data sent to it, how long do we wait (or how much data do we allow to accumulate) before dropping the data or disconnecting that process?
It seems that one would want to be able to have both SOCK_STREAM and SOCK_DGRAM publish/subscribe channels, so it isn't entirely clear yet how best to graft this functionality into the socket API. Or the socket code. It might end up needing to be its own thing instead, or needing to extend the socket API, which would be unfortunate but perhaps unavoidable.
The goal for these sockets is to provide a principled and robust scheme for passing notices around. This will allow replacing multiple existing ad hoc callback schemes (such as for notices of device insertions) and also to allow various signaling schemes among user processes that have never been very workable in Unix. Those who remember AREXX will recognize the potential; but of course it will take time before anything like that can be orchestrated.
For this reason it's important that it be reasonably easy for one of the endpoints to be inside the kernel; but it's also important that peer credentials and permissions work.
Another goal is to be able to use publish/subscribe sockets to provide a compatibility library for Linux desktop software that wants to use dbus or any successor to dbus.
I'm marking this project hard as it's still only about half baked.
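A small in-process model of the intended semantics makes the design questions concrete, including one possible answer to the slow-reader question (drop the oldest message; disconnecting the reader is another option). The class and its policy are illustrative only:

```python
# In-process model of publish/subscribe semantics: every message
# written is readable by all subscribers, each of which has its own
# bounded queue. Overflow policy here: drop the oldest message.
from collections import deque

class PubSubChannel:
    def __init__(self, maxqueue=4):
        self.maxqueue = maxqueue
        self.subscribers = {}

    def subscribe(self, name):
        self.subscribers[name] = deque()

    def publish(self, msg):
        for q in self.subscribers.values():
            if len(q) >= self.maxqueue:
                q.popleft()        # drop oldest; could instead disconnect
            q.append(msg)

    def read(self, name):
        q = self.subscribers[name]
        return q.popleft() if q else None
```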
- Contact: tech-kern, tech-embed
- Duration estimate: 2-3 months
Add support to UVM to allow processes to map pages directly from underlying objects that are already mapped into memory: for example, files in tmpfs; files in mfs; and also files on memory-mapped storage devices, like raw flash chips or experimental storage-class memory hardware.
This allows accesses (most notably, execution of user programs) to go directly to the backing object without copying the pages into main system memory. (Which in turn has obvious advantages.)
Note that uebayasi@ was working on this at one point but his changes did not get merged.
- Contact: tech-kern
- Duration estimate: 2 months
Add per-user memory usage limits (also known as "quotas") to tmpfs, using the existing quota system infrastructure.
- Contact: tech-kern
- Duration estimate: 2 months
While currently we have the cgd(4) driver for encrypting disks, setting it up is fairly involved. Furthermore, while it's fairly easy to use it just for /home, in an ideal world the entire disk should be encrypted; this leads to some nontrivial bootstrapping problems.
Develop a scheme for mounting root on cgd that does not require explicit manual setup, that passes cryptographic muster, and that protects everything on the root volume except for what absolutely must be exposed. Implement it.
The following is a non-exhaustive list of issues to consider:
- How should we tell when root should be on cgd (perhaps in boot.cfg?)
- When (and how) do we enter the passphrase needed to mount root (at mount-root time? in the bootloader? after mounting a fake root?)
- Key management for the encryption passphrase
- Where to keep the bootloader and/or kernels
- Should the bootloader be able to read the cgd to get the boot kernel from it?
- If the kernel is not on cgd, should it be signed to allow the bootloader to verify it?
- Integration with sysinst so all you need to do to get FDE is to hit a checkbox
- Perhaps, making it easy or at least reasonably possible to migrate an unencrypted root volume to cgd
Note that while init(8) currently has a scheme for mounting a temporary root and then chrooting to the real root afterwards, it doesn't work all that well. Improving it is somewhat difficult; also, ideally init(8) would be on the encrypted root volume. It would probably be better to support mounting the real root directly on cgd.
Another option is a pivot_root type of operation like Linux has, which allows mounting a fake root first and then shuffling the mount points to move something else into the / position. This has its drawbacks as well, and again ideally there would be no unencrypted fake root volume.
- Contact: tech-kern
- Mentors: Taylor R Campbell
- Duration estimate: 350h
The current asynchronous I/O (aio) implementation works by forking a background thread inside the kernel to do the I/O synchronously. This is a starting point, but one thread limits the amount of potential parallelism, and adding more threads falls down badly when applications want to have large numbers of requests outstanding at once.
Furthermore, the existing code doesn't even work in some cases; this is not well documented but there have been scattered reports of problems that nobody had time to look into in detail.
In order to make asynchronous I/O work well, the I/O path needs to be revised, particularly in the kernel's file system interface, so that all I/O operations are asynchronous requests by default. It is easy for high-level code to wait synchronously for lower-level asynchronous requests to complete; it is much more problematic for an asynchronous request to call into code that expects to be synchronous.
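The asymmetry is easy to model: if the low-level primitive is an asynchronous request with a completion event, the synchronous wrapper is trivial, while the reverse direction has no such easy construction. The request object and driver simulation below are invented for illustration:

```python
# Model of the asymmetry described above: synchronous waiting on an
# asynchronous request is a trivial wrapper; an asynchronous caller
# entering code that blocks has no equally simple fix.
import threading

class IORequest:
    def __init__(self):
        self.done = threading.Event()
        self.result = None

    def complete(self, result):
        """Called by the 'driver' when the I/O finishes."""
        self.result = result
        self.done.set()

def read_async(data):
    """Start a simulated I/O and return the request immediately."""
    req = IORequest()
    threading.Timer(0.01, req.complete, args=(data,)).start()
    return req

def read_sync(data):
    """Synchronous read built trivially on the async primitive."""
    req = read_async(data)
    req.done.wait()
    return req.result
```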
The flow control project, which also requires substantial revisions to the I/O path, naturally combines with this project.
- Contact: tech-kern
NetBSD has preliminary DTrace support, but it currently provides only the SDT and FBT providers.
riz@ has a patch for a syscall provider.
- Contact: tech-kern
The Ext4 file system is the standard file system for Linux (recent Ubuntu, and somewhat older Fedora). It is the successor of the Ext3 file system.
It has journaling, larger file system volumes, improved timestamps, and other new features.
The NetBSD kernel support and accompanying userland code should be written.
It is not clear at the moment if this should be done by working on the existing ext2 code or not, and if not, whether it should be integrated with the ufs code or not.
- Contact: tech-kern
Linux provides an ISC-licensed Broadcom SoftMAC driver; its source code is included in the Linux kernel tree.
- Contact: tech-ports
Xilinx MicroBlaze is a RISC processor core for Xilinx FPGA chips. MicroBlaze can have an MMU, so NetBSD could support it.
- Contact: tech-pkg
Google Chrome is widely used nowadays, and it has some state-of-the-art functionality.
The Chromium web browser is the open-source edition of Google Chrome.
Currently Chromium is present in pkgsrc-wip as wip/chromium. Please see wip/chromium/TODO for a TODO list about it.
- Contact: tech-pkg
If one embarks on a set of package updates and it doesn't work out too well, it is nice to be able to roll back to a previous state. This entails two things: first, reverting the set of installed packages to what it was at some chosen prior time, and second, reverting changes to configuration, saved application state, and other such material that might be needed to restore to a fully working setup.
This project is about the first part, which is relatively tractable. The second part is Hard(TM), but it's pointless to even speculate about it until we can handle the first part, which we currently can't. Also, in many cases the first part is perfectly sufficient to recover from a problem.
Ultimately the goal would be to be able to do something like pkgin --revert yesterday, but there are a number of intermediate steps needed to provide the infrastructure; after that, adding the support to pkgin (or whatever else) should be straightforward.
The first thing we need is a machine-readable log somewhere of package installs and deinstalls. Any time a package is installed or removed, pkg_install adds a record to this log. It also has to be possible to trim the log so it doesn't grow without bound -- it might be interesting to keep the full history of all package manipulations over ten years, but for many people that's just a waste of disk space. Adding code to pkg_install to do this should be straightforward; the chief issues are
- choosing the format
- deciding where to keep the data
both of which require some measure of community consensus.
Preferably, the file should be text-based so it can be manipulated by hand if needed. If not, there ought to be some way to recover it if it gets corrupted. Either way the file format needs to be versioned and extensible to allow for future changes.
The file format should probably have a way to enter snapshots of the package state in addition to change records; otherwise computing the state at a particular point requires scanning the entire file. Note that this is very similar to deltas in version control systems and there's a fair amount of prior art.
Note that we can already almost do this using /var/backups/work/pkgs.current,v; but it only updates once a day and the format has assorted drawbacks for automated use.
The next thing needed is a tool (maybe part of pkg_info, maybe not) to read this log and both (a) report the installed package state as of a particular point in time, and (b) print the differences between then and now, or between then and some other point in time.
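A sketch of such a log and its replay, with an invented line-oriented format (as noted above, choosing the real format needs community consensus). Snapshots let the replay start partway through rather than scanning the whole history:

```python
# Sketch of the package-history log: a versioned, text-based format
# with change records and periodic state snapshots. Format invented
# for illustration; timestamps are arbitrary integers here.

LOG = """\
pkglogv1
snapshot 1000 foo-1.0 bar-2.1
install 1100 baz-0.9
deinstall 1200 bar-2.1
install 1300 bar-2.2
"""

def state_at(log, when):
    """Replay the log and return the set of installed packages as of
    time 'when', using the latest snapshot at or before that time."""
    installed = set()
    for line in log.splitlines():
        fields = line.split()
        if not fields or fields[0] == "pkglogv1":
            continue                      # skip blanks and version header
        op, t, pkgs = fields[0], int(fields[1]), fields[2:]
        if t > when:
            break
        if op == "snapshot":
            installed = set(pkgs)         # restart from snapshot state
        elif op == "install":
            installed.update(pkgs)
        elif op == "deinstall":
            installed.difference_update(pkgs)
    return installed

assert state_at(LOG, 1250) == {"foo-1.0", "baz-0.9"}
```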
Given these two things it becomes possible to manually revert your installed packages to an older state by replacing newer packages with older packages. There are then two further things to do:
1. Arrange a mechanism to keep the .tgz files for old packages on file. With packages one builds oneself, this can already be done by having them accumulate in /usr/pkgsrc/packages; however, that has certain disadvantages, most notably that old packages have to be pruned by hand. Also, for downloaded binary packages no comparable scheme exists yet.
2. Provide a way to compute the set of packages to alter, install, or remove to switch to a different state. This is somewhat different from, but very similar to, the update computations that tools like pkgin and pkg_rolling-replace do, so it probably makes sense to implement this in one or more of those tools rather than in pkg_install; but perhaps not.
There are some remaining issues, some of which aren't easily solved without strengthening other things in pkgsrc. The most notable one is: what about package options? If I rebuild a package with different options, it's still the "same" package (same name, same version) and even if I record the options in the log, there's no good way to distinguish the before and after binary packages.
- Contact: tech-x11, tech-toolchain
Some of the games in base would be much more playable with even a simple graphical interface. The card games (e.g. canfield, cribbage) are particularly pointed examples; but quite a few others, e.g. atc, gomoku, hunt, and sail (if anyone ever fixes its backend), would benefit as well.
There are two parts to this project: the first and mostly easy part is to pick a game and write an alternate user interface for it, using SDL or gtk2 or tk or whatever seems appropriate.
The hard part is to arrange the system-level infrastructure so that the new user interface appears as a plugin for the game that can be installed from pkgsrc but that gets found and run if appropriate when the game is invoked from /usr/games. The infrastructure should be sufficiently general that lots of programs in base can have plugins.
Some things this entails:
- possibly setting up a support library in the base system for program plugins, if it appears warranted;
- setting up infrastructure in the build system for programs with plugins, if needed;
- preferably also setting up infrastructure in the build system for building plugins;
- choosing a place to put the header files needed to build external plugins;
- choosing a place to put plugin libraries, too, as there isn't a common model out there yet;
- establishing a canonical way for programs in base to find things in pkgsrc, which is not technically difficult but will require a lot of wrangling to reach a community consensus;
- setting up any pkgsrc infrastructure needed to build plugin packages for base (this should take little or no work).
It is possible that plugins warrant a toplevel directory under each prefix rather than being stuffed in lib/ directories; e.g. /usr/plugins, /usr/pkg/plugins, /usr/local/plugins, so the pkgsrc-installed plugins for e.g. rogue would go by default in /usr/pkg/plugins/rogue.
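A sketch of the lookup a game might perform at startup, using the per-prefix plugin directories suggested above. The paths and the entry-point file name are hypothetical:

```python
# Sketch of base-system plugin discovery: search the per-prefix
# plugin directories for a UI plugin for this game, falling back to
# the built-in curses interface if none is installed.
import os

PLUGIN_PREFIXES = ["/usr/plugins", "/usr/pkg/plugins", "/usr/local/plugins"]

def find_plugin(game, prefixes=PLUGIN_PREFIXES):
    """Return the path of the first installed UI plugin for 'game',
    or None to fall back to the built-in interface."""
    for prefix in prefixes:
        path = os.path.join(prefix, game, "ui.so")   # name is hypothetical
        if os.path.exists(path):
            return path
    return None
```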
Note that while this project is disguised as a frivolous project on games, the infrastructure created will certainly be wanted in the medium to long term for other more meaty things. Doing it on games first is a way to get it done without depending on or conflicting with much of anything else.
- Contact: tech-userlevel, tech-x11
In MacOS X, while logged in on the desktop, you can switch to another user (leaving yourself logged in) via a system menu.
The best approximation to this we have right now is to hit ctrl-alt-Fn, switch to a currently unused console, log in, and run startx with an alternate screen number. This has a number of shortcomings, both from the point of view of general polish (logging in on a console and running startx is very untidy, and you have to know how) and of technical underpinnings (starting multiple X servers uses buckets of memory, may cause driver or drmgem issues, acceleration may not work except in the first X server, etc.)
Ideally we'd have a better scheme for this. We don't necessarily need something as slick as OS X provides, and we probably don't care about Apple's compositing effects when switching, but it's useful to be able to switch users as a way of managing least privilege and it would be nice to make it easy to do.
The nature of X servers makes this difficult; for example, it isn't in any way safe to use the same X server for more than one user. It also isn't safe to connect a secure process to a user's X server to display things.
It seems that the way this would have to work is akin to job control: you have a switching supervisor process, which is akin to a shell, that runs as root in order to do authentication and switches (or starts) X servers for one or more users. The X servers would all be attached, I guess, to the same graphics backend device (wsfb, maybe? wsdrmfb?) and be switched in and out of the foreground in much the way console processes get switched in and out of the foreground on a tty.
You have to be able to get back to the switching supervisor from whatever user and X server you're currently running in. This is akin to ^Z to get back to your shell in job control. However, unlike in job control there are security concerns: the key combination has to be something that a malicious application, or even a malicious X server, can't intercept. This is the "secure attention key". Currently even the ctrl-alt-Fn sequences are handled by the X server; supporting this will take quite a bit of hacking.
Note that the secure attention key will also be wanted for other desktop things: any scheme that provides desktop-level access to system configuration needs to be able to authenticate a user as root. The only safe way to do this is to notify the switching supervisor and then have the user invoke the secure attention key; then the user authenticates to the switching supervisor, and that in turn provides some secured channel back to the application. This avoids a bunch of undesirable security plumbing as is currently found in (I think) GNOME; it also avoids the habit GNOME has of popping up unauthenticated security dialogs asking for the root password.
Note that while the switching supervisor could adequately run in text mode, making a nice-looking graphical one won't be difficult. Also that would allow the owner of the machine to configure the appearance (magic words, a custom image, etc.) to make it harder for an attacker to counterfeit the thing.
It is probably not even possible to think about starting this project until DRM/GEM/KMS stuff is more fully deployed, as the entire scheme presupposes being able to switch between X servers without the X servers' help.
- Contact: tech-userlevel, tech-x11
Support for "real" desktops on NetBSD is somewhat limited and also scattershot. Part of the reason for this is that most of the desktops are not especially well engineered, and they contain poorly designed infrastructure components that interact incestuously with system components that are specific to and/or make sense only on Linux. Providing the interfaces these infrastructure components wish to talk to is a large job, and Linux churn being what it is, such interfaces are usually moving targets anyway. Reimplementing them for NetBSD is also a large job and often presupposes things that aren't true or would require that NetBSD adopt system-level design decisions that we don't want.
Rather than chase after Linux compatibility, we are better off designing the system layer and infrastructure components ourselves. It is easier to place poorly conceived interfaces on top of a solid design than to reimplement a poorly conceived design. Furthermore, if we can manage to do a few things right we should be able to get at least some uptake for them.
The purpose of this project, per se, is not to implement any particular piece of infrastructure. It is to identify pieces of infrastructure that the desktop software stacks need from the operating system, or provide themselves but ought to get from the operating system, figure out solid designs that are compatible with Unix and traditional Unix values, and generate project descriptions for them in order to get them written and deployed.
For example, GNOME and KDE rely heavily on dbus for sending notifications around. While dbus itself is a more or less terrible piece of software that completely fails to match the traditional Unix model (rather than doing one thing well, it does several things at once, pretty much all badly) it is used by GNOME and KDE because there are things a desktop needs to do that require sending messages around. We need a transport for such messages; not a reimplementation of dbus, but a solid and well-conceived way of sending notices around among running processes.
Note that one of the other things dbus does is start services up; we already have inetd for that, but it's possible that inetd could use some strengthening in order to be suitable for the purpose. It will probably also need to be taught about whatever message scheme we come up with. And we might want to e.g. set up a method for starting per-user inetd at login time.
(A third thing dbus does is serve as an RPC library. For this we already have XDR, but XDR sucks; it may be that in order to support existing applications we want a new RPC library that's more or less compatible with the RPC library parts of dbus. Or that may not be feasible.)
This is, however, just one of the more obvious items that arises.
Your goal working on this project is to find more. Please edit this page and make a list, and add some discussion/exposition of each issue and possible solutions. Ones that become fleshed out enough can be moved to their own project pages.
- Contact: tech-kern, David Holland
- Duration estimate: 6-12 months
Port or implement NFSv4.
As of this writing the FreeBSD NFS code, including NFSv4, has been imported into the tree but no work has yet been done on it. The intended plan is to port it and eventually replace the existing NFS code, which nobody is particularly attached to.
It is unlikely that writing new NFSv4 code is a good idea, but circumstances might change.
- Contact: tech-kern
- Duration estimate: 3-6 months
Port Dragonfly's Hammer file system to NetBSD.
Note that because Dragonfly has made substantial changes (different from NetBSD's substantial changes) to the 4.4BSD vfs layer this port may be more difficult than it looks up front.
- Contact: tech-pkg, Leonardo Taccari
An important part of a binary-only environment is to support something similar to the existing options framework, but producing different combinations of packages.
There is an undefined amount of work required to accomplish this, and also a set of standards and naming conventions to be applied for the output of these pkgs.
This project should also save on having to rebuild "base" components of software to support different options: see amanda-base for an example.
OpenBSD supports this now, and its implementation should be borrowed from heavily.
Preliminary work was done during Google Summer of Code 2016 as part of the Split debug symbols for pkgsrc builds project. However, the support is far from complete. In particular, to complete the project the following tasks need to be accomplished:

- Complete the `SUBPACKAGES` support similar to what was done during the GSoC (with `SUBPACKAGES` and `!SUBPACKAGES` case code duplication)
- When the `SUBPACKAGES` support is complete we can switch to an implicit (and hidden) subpackage as suggested by joerg in order to get rid of code duplication and have a single control flow relative to `SUBPACKAGES`. In other words: every package will always have at least one subpackage.
- Adapt `mk/bsd.debugdata.mk` to `SUBPACKAGES` and other possible candidates like `tex-*-doc`, etc. After doing that, look at less trivial possible `SUBPACKAGES` candidates like `databases/postgresql*-*`.
- Contact: tech-kern
- Duration estimate: 3 months
Lua is obviously a great choice for an embedded scripting language in both kernel space and user space.
However, there is an issue: Lua needs bindings to interface with NetBSD before it can be useful there.
Someone needs to take on the grunt-work task of simply creating Lua interfaces into every possible NetBSD component.
That same someone needs to produce meaningful documentation.
As this enhances Lua's usefulness, more Lua bindings will then be created, and so on.
This project's goal is to create that momentum by attempting to define and create a solid baseline of Lua bindings.
Here is a list of things to possibly target:
- filesystem hooks
- sysctl hooks
- proplib
- envsys
- MI system bus code
WIP
- Contact: tech-kern
Right now kqueue, the kernel event mechanism, only attaches to individual files. This works great for sockets and the like but doesn't help for a directory full of files.
The end result should be feature parity with Linux's inotify (a single directory's worth of notifications, not necessarily sub-directories), and the design must be good enough to be accepted by other users of kqueue: macOS, FreeBSD, etc.
For an overview of inotify, start on Wikipedia.
Another example API is the Windows event API.
I believe most of the work will be around genfs, namei, and vfs.
- Contact: tech-pkg
- Mentors: Emile 'iMil' Heitor, Jonathan Perkin
- Duration estimate: 350h
pkgin is aimed at being an apt-/yum-like tool for managing pkgsrc binary packages. It relies on pkg_summary(5) for installation, removal and upgrade of packages and associated dependencies, using a remote repository.
While pkgin is now widely used and seems to handle package installation, upgrade and removal correctly, there's room for improvement. In particular:
Main quest
- Support for multi-repository
- Speed-up installed packages database calculation
- Speed-up local vs remote packages matching
- Implement an automated-test system
- Handle conflicting situations (MySQL conflicting with MySQL...)
- Better logging for installed / removed / warnings / errors
To be confirmed / discussed:
- Make pkgin independent from pkg_install binaries, use pkg_install libraries or abstract them
Bonus I
In order to ease pkgin's integration with third party tools, it would be useful to split it into a library (libpkgin) providing all methods needed to manipulate packages, i.e., all pkgin's runtime capabilities and flags.
Bonus II (operation "burn the troll")
It would be a very nice addition to abstract the SQLite backend so pkgin could be plugged into any database system. A proof of concept using bdb or cdb would fulfill the task.
Useful steps:
- Understand pkgsrc
- Understand pkg_summary(5)'s logic
- Understand SQLite integration
- Understand pkgin's `impact.c' logic
- Contact: tech-pkg
- Duration estimate: 1 month
Mancoosi is a project to measure distribution quality by the number of conflicts in a software distribution environment.
The main work of this project would be to analyse possible incompatibilities between pkgsrc and Mancoosi (i.e., incompatibilities between Debian and pkgsrc), and to write a converter from pkg_summary(5) to CUDF (the format used by Mancoosi). You will need OCaml for this, as the libraries used by Mancoosi are written in OCaml.
If you are a French student in the third year of your Bachelor's, or the first year of your Master's, you could also do this project as an internship for your studies.
- Contact: port-xen
- Mentors: John Nemeth
- Duration estimate: 2 days (for initial support)
pygrub, which is part of the xentools package, allows booting a Linux domU without having to keep a copy of the kernel and ramdisk in the dom0 file system.
It extracts the grub.conf from the domU disk, processes it, and presents a grub-like menu. It then extracts the appropriate kernel and ramdisk image from the domU disk and stores temporary copies of them. Finally, it creates a file containing:
- kernel=
- ramdisk=
- extras=
This file gets processed by xend as if it were part of the domU config.
Most of the code required to do the NetBSD equivalent should be available as part of /boot or the libraries it depends on. The path to the source code for /boot is: sys/arch/i386/stand/boot.
- Contact: Leonardo Taccari, tech-pkg
Recent rpmbuild for Red Hat creates a second rpm, for later installation, containing debug symbols.
This is a very convenient system for having symbols available for debugging but not clogging up the system until crashes happen and debugging is required.
The end result of this project should be a mk.conf flag or build target (debug-build) which would result in a set of symbol files gdb could use, and which could be distributed with the pkg.
Most parts of this project were done during Google Summer of Code 2016 as part of the Split debug symbols for pkgsrc builds project. However, in order to be properly integrated in pkgsrc it will need to support multipkg, some small adjustments, and more testing in the wild.
- Contact: tech-pkg
- Mentors: unknown
- Duration estimate: unknown
Currently, bulk build results are sent to the pkgsrc-bulk mailing list. To figure out if a package is or has been building successfully, or when it broke, one must wade through the list archives and search the report e-mails by hand. Furthermore, to figure out what commits if any were correlated with a breakage, one must wade through the pkgsrc-changes archive and cross-reference manually.
The project is to produce a web/database application that can be run from the pkgsrc releng website on NetBSD.org that tracks bulk build successes and failures and provides search and cross-referencing facilities.
The application should subscribe to the pkgsrc-bulk and pkgsrc-changes mailing lists and ingest the data it finds into a SQL database. It should track commits to each package (and possibly infrastructure changes that might affect all packages) on both HEAD and the current stable branch, and also all successful and failed build reports on a per-platform (OS and/or machine type) basis.
The web part of the application should be able to retrieve summaries of currently broken packages, in general or for a specific platform and/or specific branch. It should also be able to generate a more detailed report about a single package, containing for example which platforms it has been built on recently and whether it succeeded or not; also, if it is broken, how long it has been broken, and the history of package commits and version bumps vs. build results. There will likely be other searches/reports wanted as well.
The application should also have an interface for people who do partial or individual-package check builds; that is, it should be able to generate a list of packages that have not been built since they were last committed, on a given platform or possibly on a per-user basis, and accept results from attempting to build these or subsets of these packages. It is not entirely clear what this interface should be (and e.g. whether it should be command-line-based, web-based, or what, and whether it should be limited to developers) and it's reasonable to expect that some refinements or rearrangements to it will be needed after the initial deployment.
The application should also be able to record cross-references to the bug database. To begin with at least it's reasonable for this to be handled manually.
This project should be a routine web/database application; there is nothing particularly unusual about it from that standpoint. The part that becomes somewhat less trivial is making all the data flows work: for example, it is probably necessary to coordinate an improvement in the way bulk build results are tagged by platform. It is also necessary to avoid importing the reports that appear occasionally on pkgsrc-bulk from misconfigured pbulk installs.
Note also that "OS" and "machine type" are not the only variables that can affect build outcome. There are also multiple compilers on some platforms, for which the results should be tracked separately, plus other factors such as non-default installation paths. Part of the planning phase for this project should be to identify all the variables of this type that should be tracked.
Also remember that what constitutes a "package" is somewhat slippery as well. The pkgsrc directory for a package is not a unique key; multiversion packages, such as Python and Ruby extensions, generate multiple results from a single package directory. There are also a few packages where for whatever reason the package name does not match the pkgsrc directory. The best key seems to be the pkgsrc directory paired with the package-name-without-version.
Some code already exists for this, written in Python using Postgres. Writing new code (rather than working on this existing code) is probably not a good plan.
- Contact: tech-userlevel
- Mentors: David A. Holland
- Duration estimate: unknown
Font handling in Unix has long been a disgrace. Every program that needs to deal with fonts has had to do its own thing, and there has never been any system-level support, not even to the point of having a standard place to install fonts in the system. (While there was/is a place to put X11 fonts, X is only one of the many programs doing their own thing.)
Font management should be a system service. There cannot be one type of font file -- it is far too late for that -- but there should be one standardized place for fonts, one unified naming scheme for fonts and styles, and one standard way to look up and open/retrieve/use fonts. (Note: "one standardized place" means relative to an installation prefix, e.g. share/fonts.)
Nowadays fontconfig is capable of providing much of this.
The project is:
1. Figure out for certain if fontconfig is the right solution. Note that even if the code isn't, the model may be. If fontconfig is not the right solution, figure out what the right solution is. Also ascertain whether existing applications that use the fontconfig API can be supported easily/directly or if some kind of possibly messy wrapper layer is needed and doing things right requires changing applications. Convince the developer community that your conclusions are correct so that you can go on to step 2.

   1a. Part of this is identifying exactly what is involved in "managing" and "handling" fonts and what applications require.

2. Implement the infrastructure. If fontconfig is the right solution, this entails moving it from the X sources to the base sources. Also, some of the functionality/configurability of fontconfig is probably not needed in a canonicalized environment. All of this should be disabled if possible. If fontconfig is not the right solution, implement something else in base.

3. Integrate the new solution in base. Things in base that use fonts should use fontconfig or the replacement for fontconfig. This includes console font handling, groff, mandoc, and X11. Also, the existing fonts that are currently available only to subsets of things in base should be made available to all software through fontconfig or its replacement.

   3a. It would also be useful to kill off the old xfontsel and replace it with something that interoperates fully with the new system and also has a halfway decent user interface.

4. Deploy support in pkgsrc. If the new system does not itself provide the fontconfig API, provide support for that via a wrapper package. Teach applications in pkgsrc that use fontconfig to recognize and use the new system. (This should be fairly straightforward and not require patching anything.) Make pkgsrc font installation interoperate with the new system. (This should be fairly straightforward too.) Take an inventory of applications that use fonts but don't use fontconfig. Patch one or two of these to use the new system to show that it can be done easily. If the new system is not fontconfig and has its own better API, take an inventory of applications that would benefit from being patched to use the new API. Patch one or two of these to demonstrate that it can be done easily. (If the answers from step 1 are correct, it should be fairly easy for most ordinary applications.)

5. Persuade the rest of the world that we've done this right and try to get them to adopt the solution. This is particularly important if the solution is not fontconfig. Also, if the solution is not fontconfig, provide a (possibly limited) version of the implementation as a pkgsrc package that can be used on other platforms by packages and applications that support it.
Note that step 1 is the hard part. This project requires a good bit of experience and familiarity with Unix and operating system design to allow coming up with a good plan. If you think it's obvious that fontconfig is the right answer and/or you can't see why there might be any reason to change anything about fontconfig, then this may not be the right project for you.
Because of this, this project is really not suitable for GSoC.
Dependency handling in pkgsrc is a rather complex task. There exist some cases (TeX packages, Perl packages) where it is hard to determine build dependencies precisely, so the whole thing is handled conservatively: e.g. the whole TeXLive meta-package is declared a build dependency even when only a small fraction of it is actually used. Another case is a stale heavy dependency which is no longer required but is still listed as a prerequisite.
It would be nice to have a tool (or a set of them, if necessary) to detect which installed packages, libraries or tools were actually used to build new package. Ideally, the tool should report files used during configure, build, and test stages, and packages these files are provided by.
Milestones:

- find or develop a good dependency graph algorithm
- implement and demonstrate your new system in pkgsrc by adding a make target
- expose this algorithm for use by websites such as pkgsrc.se
There exist two ways of launching processes: one is forking with fork or vfork and replacing the clone's image with an exec-family function; the other is spawning the process directly with procedures like posix_spawn. Not all platforms implement the fork model, and the spawn model has its own merits.
pkgsrc relies heavily on launching subprocesses when building software. NetBSD posix_spawn support was implemented during GSoC 2011 and is included in NetBSD 6.0. Now that NetBSD supports both models, it would be nice to compare the efficiency of the two ways of launching subprocesses and measure the effect when using pkgsrc (both in user and developer mode). In order to accomplish that, the following tools should support posix_spawn:
- devel/bmake
- shells/pdksh
- NetBSD base make
- NetBSD sh
- NetBSD base ksh
- potentially some other tools (e.g. lang/nawk, shells/bash, lang/perl5)
Optionally, MinGW spawn support can be added as well.
Milestones:
- support starting processes and subprocesses by posix_spawn in devel/bmake
- support starting processes and subprocesses by posix_spawn in shells/pdksh
- measure its efficiency and compare it to traditional fork+exec.
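As a point of reference for the comparison work, a minimal user-space launcher using posix_spawnp might look like the sketch below. The helper name `run_spawn` is illustrative, not an existing API; only standard POSIX calls are used.

```c
#include <spawn.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

/* Illustrative helper: spawn argv[0] (found via PATH) directly, with no
 * intermediate fork visible to the caller, and return its exit status. */
static int
run_spawn(char *const argv[])
{
	pid_t pid;
	int status;

	/* No file actions or spawn attributes: the child inherits everything. */
	if (posix_spawnp(&pid, argv[0], NULL, NULL, argv, environ) != 0)
		return -1;
	if (waitpid(pid, &status, 0) == -1)
		return -1;
	return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The fork model equivalent would vfork, exec, and waitpid; timing both paths under make-driven package builds is the core of the measurement task.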
This project proposal is a subtask of smp networking.
The goal of this project is to implement lockless and atomic FIFO/LIFO queues in the kernel. The routines to be implemented allow commonly typed items to be locklessly inserted at either the head or tail of a queue for either last-in, first-out (LIFO) or first-in, first-out (FIFO) behavior, respectively. However, a queue is not intrinsically LIFO or FIFO. Its behavior is determined solely by which method each item was pushed onto the queue with.
It is only possible for an item to be removed from the head of the queue. This removal is also performed in a lockless manner.
All items in the queue must share an `atomic_queue_link_t` member at the same offset from the beginning of the item. This offset is passed to `atomic_qinit`.
The proposed interface looks like this:
`void atomic_qinit(atomic_queue_t *q, size_t offset);`

Initializes the `atomic_queue_t` queue at `q`. `offset` is the offset to the `atomic_queue_link_t` inside the data structure where the pointer to the next item in this queue will be placed. It should be obtained using `offsetof`.

`void *atomic_qpeek(atomic_queue_t *q);`

Returns a pointer to the item at the head of the supplied queue `q`. If there was no item because the queue was empty, `NULL` is returned. No item is removed from the queue. Given this is an unlocked operation, it should only be used as a hint as to whether the queue is empty or not.

`void *atomic_qpop(atomic_queue_t *q);`

Removes the item (if present) at the head of the supplied queue `q` and returns a pointer to it. If there was no item to remove because the queue was empty, `NULL` is returned. Because this routine uses atomic Compare-And-Store operations, the returned item should stay accessible for some indeterminate time so that other interrupted or concurrent callers to this function with this `q` can continue to dereference it without trapping.

`void atomic_qpush_fifo(atomic_queue_t *q, void *item);`

Places `item` at the tail of the `atomic_queue_t` queue at `q`.

`void atomic_qpush_lifo(atomic_queue_t *q, void *item);`

Places `item` at the head of the `atomic_queue_t` queue at `q`.
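To make the intended semantics concrete, here is a user-space sketch of the LIFO push and the lockless pop using C11 atomics. This is an illustration under stated assumptions, not the proposed kernel implementation: a real version would use NetBSD's atomic primitives, also support FIFO tail insertion, and deal with the ABA problem that this naive compare-and-swap loop ignores.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Sketch only: names mirror the proposal, but this is not kernel code. */
typedef struct aq_link {
	struct aq_link *next;
} atomic_queue_link_t;

typedef struct {
	_Atomic(struct aq_link *) head;
	size_t offset;		/* offset of the link member inside an item */
} atomic_queue_t;

static void
atomic_qinit(atomic_queue_t *q, size_t offset)
{
	atomic_init(&q->head, NULL);
	q->offset = offset;
}

static void
atomic_qpush_lifo(atomic_queue_t *q, void *item)
{
	struct aq_link *link = (struct aq_link *)((char *)item + q->offset);
	struct aq_link *old = atomic_load(&q->head);

	do {
		link->next = old;	/* link in front of the current head */
	} while (!atomic_compare_exchange_weak(&q->head, &old, link));
}

static void *
atomic_qpop(atomic_queue_t *q)
{
	struct aq_link *old = atomic_load(&q->head);

	/* CAS the head forward; retry if another push/pop raced with us. */
	while (old != NULL &&
	    !atomic_compare_exchange_weak(&q->head, &old, old->next))
		continue;
	return old != NULL ? (void *)((char *)old - q->offset) : NULL;
}
```

Note how the `offset` trick lets any structure join the queue by embedding a link member, which is exactly why `atomic_qinit` takes an `offsetof` value.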
This project proposal is a subtask of smp networking.
The goal of this project is to implement lockless, atomic and generic Radix and Patricia trees. BSD systems have always used a radix tree for their routing tables. However, the radix tree implementation is showing its age. Its lack of flexibility (it is only suitable for use in a routing table) and overhead of use (requires memory allocation/deallocation for insertions and removals) make replacing it with something better tuned to today's processors a necessity.
Since a radix tree branches on bit differences, finding these bit differences efficiently is crucial to the speed of tree operations. This is most quickly done by XORing the key and the tree node's value together and then counting the number of leading zeroes in the result of the XOR. Many processors today (ARM, PowerPC) have instructions that can count the number of leading zeroes in a 32 bit word (and even a 64 bit word). Even those that do not can use a simple constant time routine to count them:
    int
    clz(unsigned int bits)
    {
        int zeroes = 0;

        if (bits == 0)
            return 32;
        if (bits & 0xffff0000) bits &= 0xffff0000; else zeroes += 16;
        if (bits & 0xff00ff00) bits &= 0xff00ff00; else zeroes += 8;
        if (bits & 0xf0f0f0f0) bits &= 0xf0f0f0f0; else zeroes += 4;
        if (bits & 0xcccccccc) bits &= 0xcccccccc; else zeroes += 2;
        if (bits & 0xaaaaaaaa) bits &= 0xaaaaaaaa; else zeroes += 1;
        return zeroes;
    }
The existing BSD radix tree implementation does not use this method but instead uses a far more expensive method of comparison. Adapting the existing implementation to do the above is actually more expensive than writing a new implementation.
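For instance, pairing the XOR with a constant-time clz immediately yields the branching bit for two keys. `branch_bit` below is a hypothetical helper, shown only to illustrate the technique:

```c
/* Constant-time count-leading-zeroes for a 32-bit word. */
static int
clz(unsigned int bits)
{
	int zeroes = 0;

	if (bits == 0)
		return 32;
	if (bits & 0xffff0000) bits &= 0xffff0000; else zeroes += 16;
	if (bits & 0xff00ff00) bits &= 0xff00ff00; else zeroes += 8;
	if (bits & 0xf0f0f0f0) bits &= 0xf0f0f0f0; else zeroes += 4;
	if (bits & 0xcccccccc) bits &= 0xcccccccc; else zeroes += 2;
	if (bits & 0xaaaaaaaa) bits &= 0xaaaaaaaa; else zeroes += 1;
	return zeroes;
}

/* Hypothetical helper: the bit offset (counted from the MSB) where two
 * keys first differ, i.e. where a radix tree branches; -1 if equal. */
static int
branch_bit(unsigned int a, unsigned int b)
{
	unsigned int diff = a ^ b;

	return diff != 0 ? clz(diff) : -1;
}
```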
The primary requirements for the new radix tree are:
- Be self-contained. It cannot require additional memory other than what is used in its data structures.
- Be generic. A radix tree has uses outside networking.
To make the radix tree flexible, all knowledge of how keys are represented has to be encapsulated into a `pt_tree_ops_t` structure with these functions:
`bool ptto_matchnode(const void *foo, const void *bar, pt_bitoff_t max_bitoff, pt_bitoff_t *bitoffp, pt_slot_t *slotp);`

Returns true if both `foo` and `bar` objects have the identical string of bits starting at `*bitoffp` and ending before `max_bitoff`. In addition to returning true, `*bitoffp` should be set to the smaller of `max_bitoff` or the length, in bits, of the compared bit strings. Any bits before `*bitoffp` are to be ignored. If the strings of bits are not identical, `*bitoffp` is set to where the bit difference occurred, `*slotp` is the value of that bit in `foo`, and false is returned. The `foo` and `bar` (if not `NULL`) arguments are pointers to a key member inside a tree object. If `bar` is `NULL`, then assume it points to a key consisting entirely of zero bits.

`bool ptto_matchkey(const void *key, const void *node_key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);`

Returns true if both `key` and `node_key` objects have identical strings of `bitlen` bits starting at `bitoff`. The `key` argument is the same key argument supplied to `ptree_find_filtered_node`.

`pt_slot_t ptto_testnode(const void *node_key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);`

Returns `bitlen` bits starting at `bitoff` from `node_key`. The `node_key` argument is a pointer to the key member inside a tree object.

`pt_slot_t ptto_testkey(const void *key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);`

Returns `bitlen` bits starting at `bitoff` from `key`. The `key` argument is the same key argument supplied to `ptree_find_filtered_node`.
All bit offsets are relative to the most significant bit of the key.
The ptree programming interface should contain these routines:
`void ptree_init(pt_tree_t *pt, const pt_tree_ops_t *ops, size_t ptnode_offset, size_t key_offset);`

Initializes a ptree. If `pt` points at an existing ptree, all knowledge of that ptree is lost. The `pt` argument is a pointer to the `pt_tree_t` to be initialized. The `ops` argument is a pointer to the `pt_tree_ops_t` used by the ptree; this has the four members described above. The `ptnode_offset` argument contains the offset from the beginning of an item to its `pt_node_t` member. The `key_offset` argument contains the offset from the beginning of an item to its key data; if 0 is used, a pointer to the beginning of the item will be generated.

`void *ptree_find_filtered_node(pt_tree_t *pt, const void *key, pt_filter_t filter, void *filter_ctx);`

The `filter` argument is either `NULL` or a function `bool (*)(const void *, void *, int);`

`bool ptree_insert_mask_node(pt_tree_t *pt, void *item, pt_bitlen_t masklen);`

`bool ptree_insert_node(pt_tree_t *pt, void *item);`

`void *ptree_iterate(pt_tree_t *pt, const void *node, pt_direction_t direction);`

`void ptree_remove_node(pt_tree_t *pt, const pt_tree_ops_t *ops, void *item);`
This project proposal is a subtask of smp networking.
The goal of this project is to enhance the networking protocols to process incoming packets more efficiently. The basic idea is the following: when a packet is received and it is destined for a socket, simply place the packet in the socket's receive PCQ (see atomic pcq) and wake the blocking socket. Then, the protocol is able to process the next packet.
The typical packet flow from `ip_input` is to `{rip,tcp,udp}_input`, which:

- Does the lookup to locate the socket, which takes a reader lock on the appropriate pcbtable's hash bucket.
- If found and in the proper state:
  - Do not lock the socket, since that might block and therefore stop packet demultiplexing.
  - `pcq_put` the packet to the pcb's pcq.
  - `kcont_schedule` the worker continuation with a small delay (~100ms). See kernel continuations.
  - Lock the socket's `cvmutex`.
  - Release the pcbtable lock.
  - If TCP and in sequence, then if we need to send an immediate ACK:
    - Try to lock the socket.
    - If successful, send an ACK.
  - Set a flag to process the PCQ.
  - `cv_signal` the socket's cv.
  - Release the cvmutex.
- If not found or not in the proper state:
  - Release the pcb hash table lock.
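Purely to illustrate the enqueue-then-signal idea, here is a minimal single-producer/single-consumer ring in C11. This is an illustrative stand-in, not NetBSD's pcq(9) implementation, which is MP-safe and more general:

```c
#include <stdatomic.h>
#include <stddef.h>

#define PCQ_SLOTS 8	/* ring capacity; a power of two keeps wrap cheap */

/* SPSC ring: the producer advances tail, the consumer advances head. */
typedef struct {
	_Atomic unsigned int head;	/* next slot to dequeue */
	_Atomic unsigned int tail;	/* next slot to enqueue */
	void *items[PCQ_SLOTS];
} spsc_pcq_t;

static void
spsc_pcq_init(spsc_pcq_t *q)
{
	atomic_init(&q->head, 0);
	atomic_init(&q->tail, 0);
}

/* Enqueue a packet; returns 0 when full so the caller can drop it. */
static int
spsc_pcq_put(spsc_pcq_t *q, void *pkt)
{
	unsigned int t = atomic_load(&q->tail);

	if (t - atomic_load(&q->head) == PCQ_SLOTS)
		return 0;
	q->items[t % PCQ_SLOTS] = pkt;
	atomic_store(&q->tail, t + 1);	/* publish after the slot is written */
	return 1;
}

/* Dequeue the oldest packet, or NULL if the ring is empty. */
static void *
spsc_pcq_get(spsc_pcq_t *q)
{
	unsigned int h = atomic_load(&q->head);
	void *pkt;

	if (h == atomic_load(&q->tail))
		return NULL;
	pkt = q->items[h % PCQ_SLOTS];
	atomic_store(&q->head, h + 1);
	return pkt;
}
```

In the flow above, the protocol input routine would put the packet and then `cv_signal` the socket's condition variable; the socket's worker continuation later drains the queue.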
This project proposal is a subtask of smp networking.
The goal of this project is to implement interrupt handling at the
granularity of a networking interface. When a network device gets an
interrupt, it could call <iftype>_defer(ifp)
to schedule a kernel
continuation (see kernel continuations) for that interface which could
then invoke <iftype>_poll
. Whether the interrupted source should be
masked depends on if the device is a DMA device or a PIO device. This
routine should then call (*ifp->if_poll)(ifp)
to deal with the
interrupt's servicing.
During servicing, any received packets should be passed up via
`(*ifp->if_input)(ifp, m)`, which would be responsible for ALTQ or any other
optional processing as well as protocol dispatch. Protocol dispatch in
`<iftype>_input` decodes the datalink headers, if needed, via a table
lookup and calls the matching protocol's `pr_input` to process the packet.
As such, interrupt queues (e.g. `ipintrq`) would no longer be needed. Any
transmitted packets can be processed, as can MII events. Either true or
false should be returned by `if_poll` depending on whether another
invocation of `<iftype>_poll` for this interface should be immediately
scheduled or not, respectively.
Memory allocation has to be prohibited in the interrupt routines. The
device's `if_poll` routine should pre-allocate enough mbufs to do any
required buffering. For devices doing DMA, the buffers are placed into
receive descriptors to be filled via DMA.
For devices doing PIO, pre-allocated mbufs are enqueued onto the softc of
the device so when the interrupt routine needs one it simply dequeues one,
fills it in, enqueues it onto a completed queue, and finally calls
`<iftype>_defer`. If the number of pre-allocated mbufs drops below a
threshold, the driver may decide to increase the number of mbufs that
`if_poll` pre-allocates. If there are no mbufs left to receive the packet,
the packet is dropped and the number of mbufs for `if_poll` to
pre-allocate should be increased.
When interrupts are unmasked depends on a few things. If the device is interrupting "too often", it might make sense for the device's interrupts to remain masked and just schedule the device's continuation for the next clock tick. This assumes the system has a high enough value set for HZ.
This project proposal is a subtask of smp networking.
The goal of this project is to implement continuations at the kernel level. Most of the pieces are already available in the kernel, so this can be reworded as: combine callouts, softints, and workqueues into a single framework. Continuations are meant to be cheap; very cheap.
These continuations are a dispatching system for making callbacks at scheduled times or in different thread/interrupt contexts. They aren't "continuations" in the usual sense such as you might find in Scheme code.
Please note that the main goal of this project is to simplify the implementation of SMP networking, so care must be taken in the design of the interface to support all the features required for this other project.
The proposed interface looks like the following. This interface is mostly
derived from the callout(9)
API and is a superset of the softint(9) API.
The most significant change is that workqueue items are not tied to a
specific kernel thread.
kcont_t *kcont_create(kcont_wq_t *wq, kmutex_t *lock, void (*func)(void *, kcont_t *), void *arg, int flags);
A wq must be supplied. It may be one returned by kcont_workqueue_acquire or a predefined workqueue such as (sorted from highest priority to lowest): wq_softserial, wq_softnet, wq_softbio, wq_softclock, wq_prihigh, wq_primedhigh, wq_primedlow, wq_prilow.

lock, if non-NULL, should be locked before calling func(arg) and released afterwards. However, if the lock is released and/or destroyed before the called function returns, then, before returning, kcont_setmutex must be called with either a new mutex to be released or NULL. If acquiring lock would block, other pending kernel continuations which depend on other locks may be dispatched in the meantime. However, all continuations sharing the same set of { wq, lock, [ci] } need to be processed in the order they were scheduled.

flags must be 0. This field is provided only for extensibility.

int kcont_schedule(kcont_t *kc, struct cpu_info *ci, int nticks);
If the continuation is marked as INVOKING, an error of EBUSY should be returned. If nticks is 0, the continuation is marked as INVOKING while EXPIRED and PENDING are cleared, and the continuation is scheduled to be invoked without delay. Otherwise, the continuation is marked as PENDING while the EXPIRED status is cleared, and the timer is reset to nticks. Once the timer expires, the continuation is marked as EXPIRED and INVOKING, and the PENDING status is cleared. If ci is non-NULL, the continuation is invoked on the specified CPU if the continuation's workqueue has per-cpu queues; if that workqueue does not provide per-cpu queues, an error of ENOENT is returned. Otherwise, when ci is NULL, the continuation is invoked on either the current CPU or the next available CPU, depending on whether the continuation's workqueue has per-cpu queues or not, respectively.

void kcont_destroy(kcont_t *kc);
kmutex_t *kcont_getmutex(kcont_t *kc);
Returns the lock currently associated with the continuation kc.

void kcont_setarg(kcont_t *kc, void *arg);
Updates arg in the continuation kc. If no lock is associated with the continuation, then arg may be changed at any time; however, if the continuation is being invoked, it may not pick up the change. Otherwise, kcont_setarg must only be called when the associated lock is locked.

kmutex_t *kcont_setmutex(kcont_t *kc, kmutex_t *lock);
Updates the lock associated with the continuation kc and returns the previous lock. If no lock is currently associated with the continuation, then calling this function with a lock other than NULL will trigger an assertion failure. Otherwise, kcont_setmutex must be called only when the existing lock (which will be replaced) is locked. If kcont_setmutex is called as a result of the invocation of func, then after kcont_setmutex has been called but before func returns, the replaced lock must have been released, and the replacement lock, if non-NULL, must be locked upon return.

void kcont_setfunc(kcont_t *kc, void (*func)(void *), void *arg);
Updates func and arg in the continuation kc. If no lock is associated with the continuation, then only arg may be changed. Otherwise, kcont_setfunc must be called only when the associated lock is locked.

bool kcont_stop(kcont_t *kc);
The kcont_stop function stops the timer associated with the continuation handle kc. The PENDING and EXPIRED status for the continuation handle is cleared. It is safe to call kcont_stop on a continuation handle that is not pending, so long as it is initialized. kcont_stop will return a non-zero value if the continuation was EXPIRED.

bool kcont_pending(kcont_t *kc);
The kcont_pending function tests the PENDING status of the continuation handle kc. A PENDING continuation is one whose timer has been started and has not expired. Note that it is possible for a continuation's timer to have expired without being invoked if the continuation's lock could not be acquired or there are higher priority threads preventing its invocation. Note that it is only safe to test PENDING status when holding the continuation's lock.

bool kcont_expired(kcont_t *kc);
Tests to see if the continuation's function has been invoked since the last kcont_schedule.

bool kcont_active(kcont_t *kc);
bool kcont_invoking(kcont_t *kc);
Tests the INVOKING status of the handle kc. This flag is set just before a continuation's function is called. Since the scheduling of the worker threads may induce delays, other pending higher-priority code may run before the continuation function is allowed to run. This may create a race condition if this higher-priority code deallocates storage containing one or more continuation structures whose continuation functions are about to be run. In such cases, one technique to prevent references to deallocated storage would be to test whether any continuation functions are in the INVOKING state using kcont_invoking, and if so, to mark the data structure and defer storage deallocation until the continuation function is allowed to run. For this handshake protocol to work, the continuation function will have to use the kcont_ack function to clear this flag.

bool kcont_ack(kcont_t *kc);
Clears the INVOKING state in the continuation handle kc. This is used in situations where it is necessary to protect against the race condition described under kcont_invoking.

kcont_wq_t *kcont_workqueue_acquire(pri_t pri, int flags);
Returns a workqueue that matches the specified criteria; if multiple requesters ask for the same criteria, they are all returned the same workqueue. pri specifies the priority at which the kernel thread that empties the workqueue should run.

If flags is 0 then standard operation is required. However, the following flag(s) may be bitwise ORed together: WQ_PERCPU specifies that the workqueue should have a separate queue for each CPU, thus allowing continuations to be invoked on specific CPUs.
int kcont_workqueue_release(kcont_wq_t *wq);
Releases an acquired workqueue. On the last release, the workqueue's resources are freed and the workqueue is destroyed.
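Since this interface is only proposed, there is nothing to compile against. The following userspace sketch mocks just one property the proposal requires: continuations scheduled on the same workqueue are dispatched in FIFO order. All names other than those in the proposal are invented for illustration:

```c
#include <stddef.h>

/*
 * Userspace toy model of the proposed kcont dispatch.  The real
 * implementation would involve workqueues, mutexes, per-CPU state
 * and timers; this only demonstrates ordered dispatch.
 */
#define MAXQ 16

struct kc {				/* stands in for a kcont_t */
	void (*func)(void *);
	void *arg;
	int pending;
};

struct kq {				/* stands in for a kcont_wq_t */
	struct kc *q[MAXQ];
	int head, tail;
};

/* Mark the continuation PENDING and append it to the queue. */
static void
kq_schedule(struct kq *wq, struct kc *kc)
{
	kc->pending = 1;
	wq->q[wq->tail++ % MAXQ] = kc;
}

/* Drain the queue in scheduling order, as a workqueue thread would. */
static int
kq_run(struct kq *wq)
{
	int n = 0;
	while (wq->head != wq->tail) {
		struct kc *kc = wq->q[wq->head++ % MAXQ];
		kc->pending = 0;	/* clear PENDING before invoking */
		kc->func(kc->arg);
		n++;
	}
	return n;
}

/* Example continuation function: records invocation order. */
static int kq_log[8], kq_nlog;
static void
kq_record(void *arg)
{
	kq_log[kq_nlog++] = *(int *)arg;
}
```
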
This project proposal is a subtask of smp networking.
The goal of this project is to improve the way the processing of incoming packets is handled.
Instead of having a set of active workqueue lwps waiting to service sockets, the kernel should use the lwp that is blocked on the socket to service the workitem. The lwp is not being productive while blocked, and it has an interest in getting that workitem done; perhaps the data can even be copied directly to the user's address space, avoiding queuing in the socket at all.
This project proposal is a subtask of smp networking.
The goal of this project is to remove the ARP, AARP, ISO SNPA, and IPv6
Neighbors from the routing table. Instead, the ifnet
structure should
have a set of nexthop caches (usually implemented using
patricia trees), one per address family.
Each nexthop entry should contain the datalink header needed to reach the
neighbor.
This will remove cloneable routes from the routing table and remove the need to maintain protocol-specific code in the common Ethernet, FDDI, PPP, etc. code and put it back where it belongs, in the protocol itself.
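A hedged sketch of what such a per-ifnet, per-family nexthop cache might look like. A fixed array stands in for the patricia tree the proposal suggests, and all identifiers (nh_cache, nhc_lookup, nhc_insert, the sizes) are illustrative:

```c
#include <string.h>

/*
 * Toy nexthop cache: one of these per address family would hang off
 * the ifnet.  Each entry carries the prebuilt datalink header needed
 * to reach the neighbor, so no protocol knowledge is needed in the
 * datalink output path.
 */
#define NHC_SIZE  8
#define DLHDR_LEN 14			/* e.g. an Ethernet header */

struct nexthop {
	unsigned char addr[16];		/* protocol address of neighbor */
	int addrlen;
	unsigned char dlhdr[DLHDR_LEN];	/* prebuilt datalink header */
	int valid;
};

struct nh_cache {
	struct nexthop ent[NHC_SIZE];
};

static struct nexthop *
nhc_lookup(struct nh_cache *c, const void *addr, int addrlen)
{
	for (int i = 0; i < NHC_SIZE; i++)
		if (c->ent[i].valid && c->ent[i].addrlen == addrlen &&
		    memcmp(c->ent[i].addr, addr, addrlen) == 0)
			return &c->ent[i];
	return NULL;
}

static struct nexthop *
nhc_insert(struct nh_cache *c, const void *addr, int addrlen,
    const void *dlhdr)
{
	for (int i = 0; i < NHC_SIZE; i++) {
		if (!c->ent[i].valid) {
			memcpy(c->ent[i].addr, addr, addrlen);
			c->ent[i].addrlen = addrlen;
			memcpy(c->ent[i].dlhdr, dlhdr, DLHDR_LEN);
			c->ent[i].valid = 1;
			return &c->ent[i];
		}
	}
	return NULL;		/* cache full; a real tree would grow */
}
```
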
This project proposal is a subtask of smp networking.
The goal of this project is to make the SYN cache optional; for small systems it is complete overkill.
This project proposal is a subtask of smp networking and is eligible for funding independently.
The goal of this project is to implement full virtual network stacks. A
virtual network stack collects all the global data for an instance of a
network stack (excluding AF_LOCAL
). This includes the routing table, data for multiple domains and their protocols, and the mutexes needed for regulating access to it all. In other words, a brane is an instance of a networking stack.
An interface belongs to a brane, as do processes. This can be considered
a chroot(2)
for networking, e.g. chbrane(2)
.
- Contact: tech-net
Design and program a scheme for extending the operating range of 802.11 networks by using techniques like frame combining and error-correcting codes to cope with low S/(N+I) ratio. Implement your scheme in one or two WLAN device drivers -- Atheros & Realtek, say.
This project is on hold due to the conversion project needing to be completed first.
- Contact: tech-net
Modern 802.11 NICs provide two or more transmit descriptor rings, one for each priority level or 802.11e access category. Add to NetBSD a generic facility for placing a packet onto a different hardware transmit queue according to its classification by pf or IP Filter. Demonstrate this facility on more than one 802.11 chipset.
This project is on hold due to the conversion project needing to be completed first.
- Contact: tech-toolchain
- Duration estimate: 2 months
BSD make (aka bmake) uses traditional suffix rules (.c.o: ...) instead of pattern rules like gmake's (%.o: %.c ...), which are more general and flexible.
The suffix module should be re-written to work from a general match-and-transform function on targets, which is sufficient to implement not only traditional suffix rules and gmake pattern rules, but also whatever other more general transformation rule syntax comes down the pike next. Then the suffix rule syntax and pattern rule syntax can both be fed into this code.
Note that it is already possible to write rules where the source is computed explicitly based on the target, simply by using $(.TARGET) in the right hand side of the rule. Investigate whether this logic should be rewritten using the match-and-transform code, or if the match-and-transform code should use the logic that makes these rules possible instead.
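A possible shape for the core match-and-transform step, sketched under the assumption that a suffix rule .c.o is represented as the pattern pair %.o / %.c so both syntaxes can feed the same code. The function name and interface are invented for illustration:

```c
#include <stdio.h>
#include <string.h>

/*
 * Match a target against a gmake-style pattern, capture the stem
 * (the part matched by '%'), and substitute it into a source
 * pattern.  Returns 1 on a match with the derived source in out,
 * 0 if the target does not match the pattern.
 */
static int
match_transform(const char *target, const char *tpat,
    const char *spat, char *out, size_t outlen)
{
	const char *pct = strchr(tpat, '%');
	if (pct == NULL)
		return 0;			/* not a pattern rule */

	size_t pre = (size_t)(pct - tpat);	/* literal prefix */
	size_t suf = strlen(pct + 1);		/* literal suffix */
	size_t tlen = strlen(target);

	if (tlen < pre + suf ||
	    strncmp(target, tpat, pre) != 0 ||
	    strcmp(target + tlen - suf, pct + 1) != 0)
		return 0;			/* no match */

	/* Substitute the stem for '%' in the source pattern. */
	int stemlen = (int)(tlen - pre - suf);
	const char *spct = strchr(spat, '%');
	if (spct == NULL)
		snprintf(out, outlen, "%s", spat);
	else
		snprintf(out, outlen, "%.*s%.*s%s",
		    (int)(spct - spat), spat,
		    stemlen, target + pre, spct + 1);
	return 1;
}
```
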
Implementing pattern rules is widely desired in order to be able to read more makefiles written for gmake, even though gmake's pattern rules aren't well designed or particularly principled.
- Contact: tech-kern
- Duration estimate: 2-3 months
Currently the buffer handling logic only sorts the buffer queue (aka disksort). In an ideal world it would be able to coalesce adjacent small requests, as this can produce considerable speedups. It might also be worthwhile to split large requests into smaller chunks on the fly as needed by hardware or lower-level software.
Note that the latter interacts nontrivially with the ongoing dynamic MAXPHYS project and might not be worthwhile. Coalescing adjacent small requests (up to some potentially arbitrary MAXPHYS limit) is worthwhile regardless, though.
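A minimal model of the coalescing pass on a sorted queue might look like this; the request layout, names, and the block-count limit standing in for MAXPHYS are invented for illustration:

```c
#include <stddef.h>

/*
 * Coalesce adjacent buffer-queue requests.  Requests are assumed
 * already sorted by block address (as disksort leaves them);
 * adjacent requests of the same type are merged up to a limit.
 */
#define COALESCE_MAX 128	/* stand-in MAXPHYS limit, in blocks */

struct breq {
	long blkno;		/* starting block */
	int nblks;		/* length in blocks */
	int write;		/* read (0) or write (1) */
};

/* Merge adjacent compatible requests in place; returns new count. */
static int
coalesce(struct breq *q, int n)
{
	int out = 0;
	for (int i = 0; i < n; i++) {
		if (out > 0 &&
		    q[out - 1].write == q[i].write &&
		    q[out - 1].blkno + q[out - 1].nblks == q[i].blkno &&
		    q[out - 1].nblks + q[i].nblks <= COALESCE_MAX) {
			q[out - 1].nblks += q[i].nblks;	/* extend */
		} else {
			q[out++] = q[i];
		}
	}
	return out;
}
```
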
- Contact: tech-embed
- Duration estimate: 2 months
NetBSD version of compressed cache system (for low-memory devices): http://linuxcompressed.sourceforge.net/.
- Contact: tech-kern
In a file system with symlinks, the file system can be seen as a graph rather than a tree. The meaning of .. potentially becomes complicated in this environment.
There is a fairly substantial group of people, some of them big famous names, who think that the usual behavior (where crossing a symlink is different from entering a subdirectory) is a bug, and have made various efforts from time to time to "fix" it. One such fix can be seen in the -L and -P options to ksh's pwd.
Rob Pike implemented a neat hack for this in Plan 9. It is described in http://cm.bell-labs.com/sys/doc/lexnames.html. This project is to implement that logic for NetBSD.
Note however that there's another fairly substantial group of people, some of them also big famous names, who think that all of this is a load of dingo's kidneys, the existing behavior is correct, and changing it would be a bug. So it needs to be possible to switch the implementation on and off as per-process state.
- Contact: tech-kern
- Duration estimate: 2-3 months
The ext2 file system is the lowest common denominator Unix-like file system in the Linux world, as ffs is in the BSD world. NetBSD has had kernel support for ext2 for quite some time.
However, the Linux world has moved on, with ext3 and now to some extent also ext4 superseding ext2 as the baseline. NetBSD has no support for ext3; the goal of this project is to implement that support.
Since ext3 is a backward-compatible extension that adds journaling to ext2, NetBSD can mount clean ext3 volumes as ext2 volumes. However, NetBSD cannot mount ext3 volumes with journaling and it cannot handle recovery for crashed volumes. As ext2 by itself provides no crash recovery guarantees whatsoever, this journaling support is highly desirable.
The ext3 support should be implemented by extending the existing ext2 support (which is in src/sys/ufs/ext2fs), not by rewriting the whole thing over from scratch. It is possible that some of the ostensibly filesystem-independent code that was added along with the ffs WAPBL journaling extensions might be also useable as part of an ext3 implementation; but it also might not be.
The full requirements for this project include complete support for ext3 in both the kernel and the userland tools. It is possible that a reduced version of this project with a clearly defined subset of these requirements could still be a viable GSOC project; if this appeals to you please coordinate with a prospective mentor. Be advised, however, that several past ext3-related GSOC projects have failed; it is a harder undertaking than you might think.
An additional useful add-on goal would be to audit the locking in the existing ext2 code; the ext2 code is not tagged MPSAFE, meaning it uses a biglock on multiprocessor machines, but it is likely that either it is in fact already safe and just needs to be tagged, or can be easily fixed. (Note that while this is not itself directly related to implementing ext3, auditing the existing ext2 code is a good way to become familiar with it.)
- Contact: tech-kern, tech-embed
Implement a flash translation layer.
A flash translation layer does block remapping, translating from visible block addresses used by a file system to physical cells on one or more flash chips. This provides wear leveling, which is essential for effective use of flash, and also typically some amount of read caching and write buffering. (And it takes care of excluding cells that have gone bad.)
This allows FFS, LFS, msdosfs, or whatever other conventional file system to be used on raw flash chips. (Note that SSDs and USB flash drives and so forth contain their own FTLs.)
FTLs involve quite a bit of voodoo and there is a lot of prior art and research; do not just sit down and start coding.
There are also some research FTLs that we might be able to get the code for; it is probably worth looking into this.
Note that NAND flash and NOR flash are different and need different handling, and the various cell types and other variations also warrant different policy choices.
The degree of overprovisioning (that is, the ratio of the raw capacity of the flash chips to the advertised size of the resulting device) should be configurable as this is a critical factor for performance.
Making the device recoverable rather than munching itself in system crashes or power failures is a nice extra, although apparently the market considers this an optional feature for consumer devices.
The flash translation layer should probably be packaged as a driver that attaches to one or more flash chips and provides a disk-type block/character device pair.
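The remapping core of an FTL can be sketched as follows. This toy ignores garbage collection, live-data tracking, and any real wear-leveling policy; every identifier and the overprovisioning ratio are invented for illustration:

```c
#include <stddef.h>

/*
 * Toy block-remapping core: a logical block is never overwritten in
 * place.  Each write goes to a fresh physical cell and the map is
 * updated, which is what spreads wear across the chip.
 */
#define NLOG  8			/* advertised (logical) blocks */
#define NPHYS 12		/* raw cells: ~1.5x overprovisioning */

struct ftl {
	int map[NLOG];		/* logical -> physical, -1 if unmapped */
	int erases[NPHYS];	/* per-cell erase count (wear) */
	int next;		/* next cell to use, round-robin */
};

static void
ftl_init(struct ftl *f)
{
	for (int i = 0; i < NLOG; i++)
		f->map[i] = -1;
	for (int i = 0; i < NPHYS; i++)
		f->erases[i] = 0;
	f->next = 0;
}

/* Write: remap the logical block to a fresh cell. */
static int
ftl_write(struct ftl *f, int lblk)
{
	int cell = f->next;
	f->next = (f->next + 1) % NPHYS;
	f->erases[cell]++;	/* cell must be erased before reuse */
	f->map[lblk] = cell;
	return cell;
}

static int
ftl_read(struct ftl *f, int lblk)
{
	return f->map[lblk];	/* -1 means never written */
}
```
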
- Contact: tech-userlevel
Use puffs or refuse to write an imapfs that you can mount on /var/mail, either by writing a new one or porting the old existing Plan 9 code that does this.
Note: there might be existing solutions, please check upfront and let us know.
- Contact: tech-kern
- Duration estimate: 4 months and up
There are many caches in the kernel. Most of these have knobs and adjustments, some exposed and some not, for sizing and writeback rate and flush behavior and assorted other voodoo, and most of the ones that aren't adjustable probably should be.
Currently all or nearly all of these caches operate on autopilot independent of the others, which does not necessarily produce good results, especially if the system is operating in a performance regime different from when the behavior was tuned by the implementors.
It would be nice if all these caches were instead coordinated, so that they don't end up fighting with one another. Integrated control of sizing, for example, would allow explicitly maintaining a sensible balance between different memory uses based on current conditions; right now you might get that, depending on whether the available voodoo happens to work adequately under the workload you have, or you might not. Also, it is probably possible to define some simple rules about eviction, like not evicting vnodes that have UVM pages still to be written out, that can help avoid unnecessary thrashing and other adverse dynamic behavior. And similarly, it is probably possible to prefetch some caches based on activity in others. It might even be possible to come up with one glorious unified cache management algorithm.
Also note that cache eviction and prefetching is fundamentally a form of scheduling, so all of this material should also be integrated with the process scheduler to allow it to make more informed decisions.
This is a nontrivial undertaking.
Step 1 is to just find all the things in the kernel that ought to participate in a coordinated caching and scheduling scheme. This should not take all that long. Some examples include:
- UVM pages
- file system metadata buffers
- VFS name cache
- vnode cache
- size of the mbuf pool
Step 2 is to restructure and connect things up so that it is readily possible to get the necessary information from all the random places in the kernel that these things occupy, without making a horrible mess and without trashing system performance in the process or deadlocking out the wazoo. This is not going to be particularly easy or fast.
Step 3 is to take some simple steps, like suggested above, to do something useful with the coordinated information, and hopefully to show via benchmarks that it has some benefit.
Step 4 is to look into more elaborate algorithms for unified control of everything. The previous version of this project cited IBM's ARC ("Adaptive Replacement Cache") as one thing to look at. (But note that ARC may be encumbered -- someone please check on that and update this page.) Another possibility is to deploy machine learning algorithms to look for and exploit patterns.
Note: this is a serious research project. Step 3 will yield a publishable minor paper; step 4 will yield a publishable major paper if you manage to come up with something that works, and it quite possibly contains enough material for a PhD thesis.
- Contact: tech-kern
Add support for Apple's extensions to ISO9660 to makefs, especially the ability to label files with Type & Creator IDs. See http://developer.apple.com/technotes/fl/fl_36.html.
- Contact: tech-kern
- Duration estimate: 1 year for port; 3 years for rewrite by one developer
Implement a BSD licensed JFS. A GPL licensed implementation of JFS is available at http://jfs.sourceforge.net/.
Alternatively, or additionally, it might be worthwhile to do a port of the GPL code and allow it to run as a kernel module.
- Contact: tech-kern
- Mentors: Jean-Yves Migeon
Today a number of OSes provide some form of kernel-level virtualization that offers better isolation mechanisms than the traditional (yet more portable) chroot(2). Currently, NetBSD lacks functionality in this field; there have been multiple attempts (gaols, mult) to implement a jails-like system, but none so far has been integrated in base.
The purpose of this project is to study the various implementations found elsewhere (FreeBSD Jails, Solaris Zones, Linux Containers/VServers, ...), and eventually see their plus/minus points. An additional step would be to see how this can be implemented using the various architectural improvements NetBSD gained, especially rump(3) and kauth(9).
Caution: this is a research project.
- Contact: tech-kern, David Holland
- Duration estimate: 2-3 months
kernfs is a virtual file system that reports information about the running system, and in some cases allows adjusting this information. procfs is a virtual file system that provides information about currently running processes. Both of these file systems work by exposing virtual files containing textual data.
The current implementations of these file systems are redundant and both are non-extensible. For example, kernfs is a hardcoded table that always exposes the same set of files; there is no way to add or remove entries on the fly, and even adding new static entries is a nuisance. procfs is similarly limited; there is no way to add additional per-process data on the fly. Furthermore, the current code is not modular, not well designed, and has been a source of security bugs in the past.
We would like to have a new implementation for both of these file systems that rectifies these problems and others, as outlined below:
kernfs and procfs should share most of their code, and in particular they should share all the code for managing lists of virtual files. They should remain separate entities, however, at least from the user perspective: community consensus is that mixing system and per-process data, as Linux always has, is ugly.
It should be possible to add and remove entries on the fly, e.g. as modules are loaded and unloaded.
Because userlevel programs can become dependent on the format of the virtual files (Linux has historically had compatibility problems because of this) they should if possible not have complex formats at all, and if they do the format should be clearly specifiable in some way that isn't procedural code. (This makes it easier to reason about, and harder for it to get changed by accident.)
There is an additional interface in the kernel for retrieving and adjusting arbitrary kernel information: sysctl. Currently the sysctl code is a third completely separate mechanism, on many points redundant with kernfs and/or procfs. It is somewhat less primitive, but the current implementation is cumbersome and not especially liked. Integrating kernfs and procfs with sysctl (perhaps somewhat like the Linux sysfs) is not automatically the right design choice, but it is likely to be a good idea. At a minimum we would like to be able to have one way to handle reportable/adjustable data within the kernel, so that kernfs, procfs, and/or sysctl can be attached to any particular data element as desired.
While most of the implementations of things like procfs and sysctl found in the wild (including the ones we currently have) work by attaching callbacks, and then writing code all over the kernel to implement the callback API, it is possible to design instead to attach data, that is, pointers to variables within the kernel, so that the kernfs/procfs or sysctl code itself takes responsibility for fetching that data. Please consider such a design strongly and pursue it if feasible, as it is much tidier. (Note that attaching data also in general requires specifying a locking model and, for writeable data, possibly a condition variable to signal on when the value changes and/or a mechanism for checking new values for validity.)
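A sketch of the attach-data style, with generic code doing the fetch, store, and validity checking; the node layout, names, and the kern.maxfiles example are hypothetical, and locking is omitted:

```c
/*
 * Attach-data node: instead of a callback, the node records a
 * pointer to the live kernel variable plus a validity range, and
 * generic code does the fetch/store.
 */
struct datanode {
	const char *name;
	int *data;		/* pointer to the live variable */
	int writable;
	int min, max;		/* validity range for writes */
};

/* Generic code fetches the value; no per-subsystem callback. */
static int
node_read(const struct datanode *n)
{
	return *n->data;
}

/* Generic code validates and stores; rejects bad writes uniformly. */
static int
node_write(struct datanode *n, int val)
{
	if (!n->writable || val < n->min || val > n->max)
		return -1;	/* read-only or out of range */
	*n->data = val;
	return 0;
}
```
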
It is possible that using tmpfs as a backend for kernfs and procfs, or sharing some code with tmpfs, would simplify the implementation. It also might not. Consider this possibility, and assess the tradeoffs; do not treat it as a requirement.
Alternatively, investigate FreeBSD's pseudofs and see if this could be a useful platform for this project and base for all the file systems mentioned above.
When working on this project, it is very important to write a complete regression test suite for procfs and kernfs beforehand to ensure that the rewrites do not create incompatibilities.
- Contact: tech-net
Improve on the Kismet design and implementation in a Kismet replacement for BSD.
- Contact: tech-pkg, port-xen
- Mentors: Jean-Yves Migeon
- Duration estimate: 1-3 weeks, depending on targetted operating system
Libvirt is a project that aims at bringing yet another level of abstraction to the management of different virtualization technologies; it supports a wide range of them, including Xen, VMware, KVM and containers.
A package for libvirt was added to pkgsrc under sysutils/libvirt, however it requires more testing before all platforms supported by pkgsrc can also seamlessly support libvirt.
The purpose of this project is to investigate what is missing in libvirt (in terms of patches or system integration) so it can work out-of-the-box for platforms that can benefit from it. GNU/Linux, NetBSD, FreeBSD and Solaris are the main targets.
- Contact: netbsd-users, tech-install
While NetBSD has had LiveCDs for a while, there has not yet been a LiveCD that allows users to install NetBSD after test-driving it. A LiveCD that contains a GUI-based installer and reliably detects the platform's features would be very useful.
- Contact: tech-userlevel
Apply statistical AI techniques to the problem of monitoring the logs of a busy system. Can one identify events of interest to a sysadmin, or events that merit closer inspection? Failing that, can one at least identify some events as routine and provide a filtered log that excludes them? Also, can one group a collection of related messages together into a single event?
- Contact: tech-misc, tech-ports
- Duration estimate: 4 months
NetBSD currently requires a system with an MMU. This obviously limits the portability. We'd be interested in an implementation/port of NetBSD on/to an MMU-less system.
- Contact: tech-kern
The policy code in the kernel that controls file caching and readahead behavior is necessarily one-size-fits-all, and the knobs available to applications to tune it, like madvise() and posix_fadvise(), are fairly blunt hammers. Furthermore, it has been shown that the overhead from user<->kernel domain crossings makes syscall-driven fine-grained policy control ineffective. (Though, that was shown in the past when processors were much slower relative to disks and it may not be true any more.)
Is it possible to use BPF, or create a different BPF-like tool (that is, a small code generator with very simple and very clear safety properties) to allow safe in-kernel fine-grained policy control?
Caution: this is a research project.
- Contact: tech-net
- Duration estimate: 3 months
Implement the ability to route based on properties like QoS label, source address, etc.
- Contact: tech-net
- Duration estimate: 2 months
Write tests for the routing code and re-factor. Use more and better-named variables.
PCBs and other structures are sprinkled with route caches (struct route). Sometimes they do not get updated when they should. It's necessary to modify rtalloc(), at least. Fix that. Note XXX in rtalloc(); this may mark a potential memory leak!
- Contact: tech-net
Create/modify a 802.11 link-adaptation module that works at least as well as SampleRate, but is generic enough to be re-used by device drivers for ADMtek, Atheros, Intel, Ralink, and Realtek 802.11 chips. Make the module export a link-quality metric (such as ETT) suitable for use in linkstate routing. The SampleRate module in the Atheros device driver sources may be a suitable starting point.
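One plausible form for the exported link-quality metric is a fixed-point EWMA of per-frame transmission time, in the spirit of ETT. The constants and names below are assumptions for illustration, not an existing driver API:

```c
/*
 * Smoothed per-frame transmission time, updated from each completed
 * (or failed-and-retried) frame.  Fixed point avoids floating point
 * in the kernel: 1.0 == EWMA_SCALE.
 */
#define EWMA_SCALE 100		/* fixed point: 1.0 == 100 */
#define EWMA_ALPHA 25		/* weight of a new sample: 0.25 */

struct link_metric {
	long ett;		/* smoothed tx time, microseconds */
};

/* Feed one frame's transmission time into the average. */
static void
metric_sample(struct link_metric *m, long txtime_us)
{
	if (m->ett == 0)
		m->ett = txtime_us;	/* first sample primes it */
	else
		m->ett = (m->ett * (EWMA_SCALE - EWMA_ALPHA) +
		    txtime_us * EWMA_ALPHA) / EWMA_SCALE;
}
```

A linkstate routing daemon could then compare this value across neighbors without knowing anything about the underlying chipset.
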
- Contact: port-mips
Currently booting a sgimips machine requires different boot commands depending on the architecture. It is not possible to use the firmware menu to boot from CD.
An improved primary bootstrap should ask the firmware for architecture detail, and automatically boot the correct kernel for the current architecture by default.
A secondary objective of this project would be to rearrange the generation of a bootable CD image so that it could just be loaded from the firmware menu without going through the command monitor.
- Contact: port-mips
NetBSD/sgimips currently runs on a number of SGI machines, but support for IP27 (Origin) and IP30 (Octane) is not yet available.
See also NetBSD/sgimips.
- Contact: port-mips
NetBSD/sgimips currently runs on O2s with R10k (or similar) CPUs, but speculative loads, for example, are not handled correctly. It is unclear whether this is purely kernel work or the toolchain needs to be changed too.
Currently softfloat is used, and bugs seem to exist in the hardware float support. Resolving these bugs and switching to hardware float would improve performance.
See also NetBSD/sgimips.
- Contact: tech-kern
Certain real-time clock chips, and other related power hardware, have a facility to allow the kernel to set a specific date and time at which the machine will power itself on. One such chip is the DS1685 RTC. A kernel API should be developed to allow such devices to have a power-on time set from userland. Additionally, the API should be made available through a userland program, or added to an existing utility such as halt(8).
It may also be useful to make this a more generic interface, to allow for configuring other devices, such as Wake-On-Lan ethernet chips, to turn them on/off, set them up, etc.
- Contact: tech-kern
- Duration estimate: 8-12 months
Remove the residual geometry code and datastructures from FFS (keep some kind of allocation groups but without most of what cylinder groups now have) and replace blocks and fragments with extents, yielding a much simpler filesystem well suited for modern disks.
Note that this results in a different on-disk format and will need to be a different file system type.
The result would be potentially useful to other operating systems beyond just NetBSD, since UFS/FFS is used in so many different kernels.
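For illustration, the essence of extent-based mapping is small: each logical range maps to a (start, length) run of disk blocks instead of individual blocks and fragments. This sketch of a logical-to-physical translation uses invented names and a plain sorted array:

```c
/*
 * One extent maps a contiguous run of logical blocks to a
 * contiguous run of physical blocks.
 */
struct extent {
	long lblk;	/* first logical block of the run */
	long pblk;	/* first physical block */
	long len;	/* run length in blocks */
};

/* Translate a logical block through a sorted extent list. */
static long
extent_bmap(const struct extent *ext, int n, long lblk)
{
	for (int i = 0; i < n; i++)
		if (lblk >= ext[i].lblk &&
		    lblk < ext[i].lblk + ext[i].len)
			return ext[i].pblk + (lblk - ext[i].lblk);
	return -1;	/* hole */
}
```

A file that is laid out contiguously needs only one extent regardless of size, which is where the simplification over blocks and fragments comes from.
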
- Contact: port-sparc, tech-ports
It would be nice to support these newer highly SMP processors from Sun. A Linux port already exists, and Sun has contributed code to the FOSS community.
(Some work has already been done and committed - see https://wiki.netbsd.org/ports/sparc64/sparc64sun4v/)
- Contact: tech-install, tech-misc
syspkgs is the concept of using pkgsrc's pkg_* tools to maintain the base system. That is, allow users to register and update components of the base system with more ease.
There has been a lot of work in this area already, but it has not yet been finalized. Which is a diplomatic way of saying that this project has been attempted repeatedly and failed every time.
- Contact: tech-kern
Add memory-efficient snapshots to tmpfs. A snapshot is a view of the filesystem, frozen at a particular point in time. The snapshotted filesystem is not frozen, only the view is. That is, you can continue to read/write/create/delete files in the snapshotted filesystem.
The interface to snapshots may resemble the interface to null mounts, e.g., 'mount -t snapshot /var/db /db-snapshot' makes a snapshot of /var/db/ at /db-snapshot/.
You should exploit features of the virtual memory system like copy-on-write memory pages to lazily make copies of files that appear both in a live tmpfs and a snapshot. This will help conserve memory.
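The copy-on-write page sharing suggested above can be modeled in a few lines of userspace C; malloc and the reference count stand in for UVM machinery, and all names are invented for illustration:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Toy COW page: a snapshot shares the live file's page by taking a
 * reference; a write to the live file copies the page first, so the
 * snapshot's frozen view is preserved.
 */
#define PAGESZ 16

struct page {
	int refs;
	char data[PAGESZ];
};

/* Snapshot takes a reference instead of copying the page. */
static struct page *
page_share(struct page *pg)
{
	pg->refs++;
	return pg;
}

/* Before modifying: copy the page if anyone else still references it. */
static struct page *
page_cow(struct page *pg)
{
	if (pg->refs == 1)
		return pg;		/* sole owner: write in place */
	struct page *np = malloc(sizeof(*np));
	memcpy(np->data, pg->data, PAGESZ);
	np->refs = 1;
	pg->refs--;			/* live file drops its reference */
	return np;
}
```

Only pages that are actually modified after the snapshot cost extra memory, which is the property that makes the snapshot memory-efficient.
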
- Contact: tech-userlevel
While we now have mandoc for handling man pages, we currently still need groff in the tree to handle miscellaneous docs that are not man pages.
This is itself an inadequate solution as the groff we have does not support PDF output (which in this day and age is highly desirable) ... and while newer groff does support PDF output it does so via a Perl script. Also, importing a newer groff is problematic for assorted other reasons.
We need a way to typeset miscellaneous articles that we can import into base and that ideally is BSD licensed. (And that can produce PDFs.) Currently it looks like there are three decent ways forward:
- Design a new roff macro package that's comparable to mdoc (e.g. supports semantic markup) but is for miscellaneous articles rather than man pages, then teach mandoc to handle it.
- Design a new set of markup tags comparable to mdoc (e.g. supports semantic markup) but for miscellaneous articles, and a different, less ratty syntax for it, then teach mandoc to handle this.
- Design a new set of markup tags comparable to mdoc (e.g. supports semantic markup) but for miscellaneous articles, and a different, less ratty syntax for it, and write a new program akin to mandoc to handle it.
These are all difficult and a lot of work, and in the case of new syntax are bound to cause a lot of shouting and stamping. Also, many of the miscellaneous documents use various roff preprocessors and it isn't clear how much of this mandoc can handle.
None of these options is particularly appealing.
There are also some less decent ways forward:
- Pick one of the existing roff macro packages for miscellaneous articles (ms, me, ...) and teach mandoc to handle it. Unfortunately all of these macro packages are pretty ratty, they're underpowered compared to mdoc, and none of them supports semantic markup.
- Track down one of the other older roff implementations, which are now probably more or less free (e.g. ditroff), and stick to the existing roff macro packages as above. In addition to the drawbacks cited above, any of these programs is likely to be old, nasty code that needs a lot of work.
- Teach the groff we have how to emit PDFs, then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, this will likely be pretty nasty work and it's still got the wrong license.
- Rewrite groff as BSD-licensed code and provide support for generating PDFs, then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, this is a horrific amount of work.
- Try to make something else do what we want. Unfortunately, TeX is a nonstarter and the only other halfway realistic candidate is lout... which is GPLv3 and at least at casual inspection looks like a horrible mess of its own.
These options are even less appealing.
Maybe someone can think of a better idea. There are lots of choices if we give up on typeset output, but that doesn't seem like a good plan either.
- Contact: tech-userlevel
- Duration estimate: 1-2 months
Due to the multitude of supported machine architectures NetBSD has to deal with many different partitioning schemes. To deal with them in a uniform way (without imposing artificial restrictions that are not enforced by the underlying firmware or bootloader partitioning scheme) wedges have been designed.
While the kernel part of wedges is mostly done (and missing parts are easy to add), a userland tool to edit wedges and to synthesize defaults from (machine/arch dependent) on-disk content is needed.
- Contact: tech-net, tech-userlevel
- Duration estimate: 1 month
Create an easy-to-use wifi setup widget for NetBSD: browse and select networks in the vicinity by SSID, BSSID, channel, etc.
The guts should probably be done as a library so that user interfaces of increasing slickness can be built on top of it as desired. (That is: there ought to be some form of this in base; but a nice looking gtk interface version would be good to have as well.)
- Contact: tech-net
Add socket options to NetBSD for controlling WLAN transmit parameters like transmit power, fragmentation threshold, RTS/CTS threshold, bitrate, 802.11e access category, on a per-socket and per-packet basis. To set transmit parameters, pass radiotap headers using sendmsg(2) and setsockopt(2).
This project is on hold due to the conversion project needing to be completed first.
- Contact: netbsd-docs
The NetBSD website building infrastructure is rather complex and requires significant resources. We need to make it easier for anybody to contribute without having to install a large number of complex applications from pkgsrc or without having to learn the intricacies of the build process.
The problem is described in more detail in this and this email and the following discussion on the netbsd-docs mailing list.
This work requires knowledge of XML, XSLT and make. This is not a request for visual redesign of the website.
- Contact: port-amd64, port-i386
- Mentors: Jean-Yves Migeon
- Duration estimate: 6 months
This project is about implementing the needed support for Intel's VT-d and AMD-IOV functionality in the native x86 ports, with a focus on amd64 first (i386 being a nice-to-have, but not strictly required).
NetBSD already has machine-independent bus abstraction layers (namely, bus_space(9) for bus-related memory operations, and bus_dma(9) for DMA related transactions) that are successfully used on other arches like SPARC for IOMMU.
The present project is to implement the machine-dependent functions to support IOMMU on x86.
Please note that it requires specific hardware for testing, as not all motherboards/chipsets have a supported, let alone correctly working, IOMMU. In case of doubt, ask on the mailing list or the point of contact.
- Contact: port-xen
- Mentors: Jean-Yves Migeon
- Duration estimate: 3-6 months, depending on subsystem considered
This project's work is composed of smaller components that can be worked on independently from others, all related to the Xen port.
Xen supports a number of machine-dependent features that NetBSD currently does not implement for the Xen port on x86. Notably:
- PCI passthrough, where PCI devices can be exposed to a guest via Xen protected mem/regs mappings;
- IOMMU (Intel's VT-d, or AMD's IOV) that protects memory access from devices for I/O, needed for safe operation of PCI/device passthrough;
- ACPI, and more specifically, ACPI states. Most commonly used on native systems to suspend/resume/shutdown a host;
- CPU and memory hotplugging;
- more elaborate VM debugging through gdbx, a lightweight debugger included with Xen.
The purpose of this project is to either add the missing parts inside NetBSD (some requiring native implementation first like IOMMU), or implement the needed interface to plug current native x86 systems (like pmf(9) for ACPI hypercalls).
- Contact: tech-kern
- Duration estimate: 1 year for port; 3 years for rewrite by one developer
Implement a BSD licensed XFS. A GPL licensed implementation of XFS is available at http://oss.sgi.com/projects/xfs/.
Alternatively, or additionally, it might be worthwhile to do a port of the GPL code and allow it to run as a kernel module.
See also FreeBSD's port.
- Contact: tech-net
Enhance zeroconfd, the Multicast DNS daemon, that was begun in NetBSD's Google Summer of Code 2005 (see work in progress: http://netbsd-soc.sourceforge.net/projects/zeroconf/). Develop a client library that lets a process publish mDNS records and receive asynchronous notification of new mDNS records. Adapt zeroconfd to use event(3) and queue(3). Demonstrate comparable functionality to the GPL or APSL alternatives (Avahi, Howl, ...), but in a smaller storage footprint, with no dependencies outside of the NetBSD base system.
Traditionally, the NetBSD kernel code had been protected by a single, global lock. This lock ensured that, on a multiprocessor system, two different threads of execution did not access the kernel concurrently and thus simplified the internal design of the kernel. However, such design does not scale to multiprocessor machines because, effectively, the kernel is restricted to run on a single processor at any given time.
The NetBSD kernel has been modified to use fine-grained locks in many of its different subsystems, achieving good performance on today's multiprocessor machines. Unfortunately, these changes have not yet been applied to the networking code, which remains protected by the single lock. In other words: NetBSD networking has evolved to work in a uniprocessor environment; switching it to fine-grained locking is a hard and complex problem.
This project is currently claimed
Funding
At this time, The NetBSD Foundation is accepting project specifications to remove the single networking lock. If you want to apply for this project, please send your proposal to the contact addresses listed above.
Due to the size of this project, your proposal does not need to cover everything to qualify for funding. We have attempted to split the work into smaller units, and you can submit funding applications for these smaller subtasks independently as long as the work you deliver fits in the grand order of this project. For example, you could send an application to make the network interfaces alone MP-friendly (see the work plan below).
What follows is a particular design proposal, extracted from an original text written by Matt Thomas. You may choose to work on this particular proposal or come up with your own.
Tentative specification
The future of NetBSD network infrastructure has to efficiently embrace two major design criteria: Symmetric Multi-Processing (SMP) and modularity. Other design considerations include not only supporting but taking advantage of the capability of newer network devices to do packet classification, payload splitting, and even full connection offload.
You can divide the network infrastructure into 5 major components:
- Interfaces (both real devices and pseudo-devices)
- Socket code
- Protocols
- Routing code
- mbuf code.
Part of the complexity is that, due to the monolithic nature of the kernel, each layer currently feels free to call any other layer. This makes designing a lock hierarchy difficult and likely to fail.
Part of the problem is asynchronous upcalls, among which:
- ifa->ifa_rtrequest for route changes.
- pr_ctlinput for interface events.
Another source of complexity is the large number of global variables scattered throughout the source files. This makes putting locks around them difficult.
Subtasks
The proposed solution presented here includes the following tasks (in no particular order) to achieve the desired goals of SMP support and modularity:
- Lockless, atomic FIFO/LIFO queues
- Lockless, atomic and generic Radix/Patricia trees
- Fast protocol and port demultiplexing
- Implement per-interface interrupt handling
- Kernel continuations
- Lazy receive processing
- Separate nexthop cache from the routing table
- Make TCP syncache optional
- Virtual network stacks
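To make the first subtask concrete, a lockless LIFO can be built around a compare-and-swap loop (the classic Treiber stack). The sketch below uses C11 atomics purely for illustration; in-tree code would use NetBSD's own atomic primitives, and a production version must additionally solve the ABA problem (e.g. with generation counts or hazard pointers):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Node of a lock-free LIFO (Treiber stack).  Illustrative only. */
struct node {
	struct node *next;
	int value;
};

static _Atomic(struct node *) top = NULL;

void
push(struct node *n)
{
	struct node *old;

	do {
		old = atomic_load(&top);
		n->next = old;
		/* Retry if another CPU changed the top underneath us. */
	} while (!atomic_compare_exchange_weak(&top, &old, n));
}

struct node *
pop(void)
{
	struct node *old;

	do {
		old = atomic_load(&top);
		if (old == NULL)
			return NULL;
		/* NOTE: real code must handle the ABA problem here. */
	} while (!atomic_compare_exchange_weak(&top, &old, old->next));
	return old;
}
```

Both operations are wait-free for readers of the loop body and never take a lock, which is exactly why such queues help remove the big kernel lock from hot paths.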
Work plan
Aside from the list of tasks above, the work to be done for this project can be achieved by following these steps:
Move ARP out of the routing table. See the nexthop cache project.
Make the network interfaces MP-friendly; they are among the few remaining users of the big kernel lock. This needs to support multiple receive and transmit queues to help reduce locking contention. It also includes changing more of the common interfaces to do what the tsec driver does (basically do everything with softints), and changing the *_input routines to use a table for dispatch instead of the current switch code, so domains can be dynamically loaded.
Collect global variables in the IP/UDP/TCP protocols into structures. This helps the following items.
Make IPV4/ICMP/IGMP/REASS MP-friendly.
Make IPV6/ICMP/IGMP/ND MP-friendly.
Make TCP MP-friendly.
Make UDP MP-friendly.
Radical thoughts
You should also consider the following ideas:
LWPs in user space do not need a kernel stack
Those pages are only used in case an exception happens. Interrupts are probably going to their own dedicated stack. One could just keep a set of kernel stacks around. Each CPU has one; when a user exception happens, that stack is assigned to the current LWP and removed as the CPU's active stack. When that CPU next returns to user space, the kernel stack it was using is saved to be used for the next user exception. The idle lwp would just use the current kernel stack.
LWPs waiting for kernel condition shouldn't need a kernel stack
If an LWP is waiting on a kernel condition variable, it is expecting to be inactive for some time, possibly a long time. During this inactivity, it does not really need a kernel stack.
When the exception handler gets a usermode exception, it sets the LWP's restartable flag to indicate that the exception is restartable, and then services the exception as normal. As routines are called, they can clear the LWP's restartable flag as needed. When an LWP needs to block for a long time, instead of calling cv_wait, it could call cv_restart. If cv_restart returned false, the LWP's restartable flag was clear, so cv_restart acted just like cv_wait. Otherwise, the LWP and CV would have been tied together (big hand wave), the lock would have been released, and the routine should have returned ERESTART. cv_restart could also wait for a small amount of time, like .5 seconds, and only do this if the timeout expires.

As the stack unwinds, it would eventually return to the exception handler. The handler would see the LWP has a bound CV, save the LWP's user state into the PCB, set the LWP to sleeping, mark the LWP's stack as idle, and call the scheduler to find more work. When called, cpu_switchto would notice the stack is marked idle and detach it from the LWP.

When the condition times out or is signalled, the first LWP attached to the condition variable is marked runnable and detached from the CV. When the cpu_switchto routine is called, it would notice the lack of a stack, grab one, restore the trapframe, and reinvoke the exception handler.
- Contact: tech-userlevel
- Mentors: David Young
- Duration estimate: 3 months
Refactor utilities in the base system, such as netstat, top, and vmstat, that format and display tables of statistics.
One possible refactoring divides each program into three:
- one program reads the statistics from the kernel and writes them in machine-readable form to its standard output
- a second program reads machine-readable tables from its standard input and writes them in a human-readable format to the standard output
- and a third program supervises a pipeline of the first two, browsing and refreshing the table.
Several utilities will share the second and third program.
- Contact: tech-kern, tech-userlevel
- Duration estimate: 3 months
Currently, the puffs(3) interface between the kernel and userspace uses various system structures for passing information. Examples are struct stat and struct uucred. If these change in layout (such as with the time_t size change in NetBSD 6.0), old puffs servers must be recompiled.
The project milestones are:
- define a binary-independent protocol
- implement support
- measure the performance difference with direct kernel struct passing
- if there is a huge difference, investigate the possibility for having both an internal and external protocol. The actual decision to include support will be made on the relative complexity of the code for dual support.
While this project will be partially implemented in the kernel, it is fairly well-contained and prior kernel experience is not necessary.
If there is time and interest, a suggested subproject is making sure that p2k(3) does not suffer from similar issues. This is a required subproject if dual support as mentioned above is not necessary.
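To illustrate what "binary-independent" means here: instead of shipping a raw struct stat across the kernel/userspace boundary, the protocol would define fixed-width wire records. The sketch below uses invented names and is not an actual puffs protocol definition:

```c
#include <stdint.h>
#include <string.h>
#include <sys/stat.h>

/* Hypothetical wire record: fixed-width fields instead of raw
 * struct stat, so a layout change in the system headers (e.g. a
 * wider time_t) does not break old servers.  Illustrative only. */
struct pwire_stat {
	uint64_t ps_size;	/* file size in bytes */
	uint64_t ps_mtime_sec;	/* always 64-bit seconds */
	uint32_t ps_mode;	/* file type and permission bits */
};

void
pwire_from_stat(struct pwire_stat *w, const struct stat *st)
{
	memset(w, 0, sizeof(*w));
	w->ps_size = (uint64_t)st->st_size;
	w->ps_mtime_sec = (uint64_t)st->st_mtime;
	w->ps_mode = (uint32_t)st->st_mode;
	/* A real implementation would also normalize byte order
	 * (e.g. to big-endian) before putting the record on the wire. */
}
```

The performance-measurement milestone would then compare this marshalling step against direct struct passing.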
- Contact: tech-userlevel
- Mentors: David Young
- Duration estimate: 3 months
Write a script that aids refactoring C programs by extracting subroutines from fragments of C code.
Do not reinvent the wheel: wherever possible, use existing technology for the parsing and comprehension of C code. Look at projects such as sparse and Coccinelle.
Your script should work as a filter that puts the free variables at the top, encloses the rest in curly braces, and does something helpful with break, continue, and return statements.
That's just a start.
This project is tagged "hard" because it's not clearly specified. The first step (like with many projects) is to work out the specification in more detail.
- Contact: tech-pkg
- Duration estimate: 3 months
The goal of this project is to generate a package or packages that will set up a cross-compiling environment for one (slow) NetBSD architecture on another (fast) NetBSD architecture, starting with (and using) the NetBSD toolchain in src.
The package will require a checked-out NetBSD src tree (or, as a refinement, parts thereof) and is supposed to generate the necessary cross-building tools using src/build.sh with appropriate flags; for the necessary /usr/include and libraries of the slow architecture, build these to fit or use the slow architecture's base.tgz and comp.tgz (or parts thereof). As an end result, you should e.g. be able to install the binary cross/pkgsrc-NetBSD-amd64-to-atari package on your fast amd64 and start building packages for your slow atari system.
Use available packages, like e.g. pkgtools/pkg_comp, to build the cross-compiling environment where feasible.
As a test target for the cross-compiling environment, use pkgtools/pkg_install, which is readily cross-compilable by itself.
If time permits, test and fix cross-compiling pkgsrc X.org, which was made cross-compilable in an earlier GSoC project, but may have suffered from feature rot since actually cross-compiling it has been too cumbersome for it to see sufficient use.
pkgsrc duplicates NetBSD efforts in maintaining X.org packages. pkgsrc X11, being able to cross-build, could replace base X11 distribution.
The latter decoupling should simplify maintenance of software; updating X11 and associated software becomes easier because of pkgsrc's shorter release cycle and greater volatility.
Cross-buildable and cross-installable pkgsrc tools could also simplify maintenance of slower systems by utilising the power of faster ones.
The goal of this project is to make it possible to bootstrap pkgsrc using available cross-build tools (e.g. NetBSD's).
This project requires good understanding of cross-development, some knowledge of NetBSD build process or ability to create cross-development toolchain, and familiarity with pkgsrc bootstrapping.
Note: basic infrastructure for this exists as part of various previous GSoC projects. General testing is lacking.
- Contact: tech-kern
- Duration estimate: 3 months
Make NetBSD behave gracefully when a "live" USB/FireWire disk drive is accidentally detached and re-attached by, for example, creating a virtual block device that receives block-read/write commands on behalf of the underlying disk driver.
This device will delegate reads and writes to the disk driver, but it will keep a list of commands that are "outstanding," that is, reads that the disk driver has not completed, and writes that have not "hit the platter," so to speak.
Milestones:
- Provide a character device for userland to read indications that a disk in use was abruptly detached.
- Following disk re-attachment, the virtual block device replays its list of outstanding commands. A correct solution will not replay commands to the wrong disk if the removable was replaced instead of re-attached.
Open questions: Prior art? Isn't this how the Amiga worked? How will this interact with mount/unmount? Is there a use-count on devices? Can you leverage "wedges" in your solution? Does any/most/all removable storage indicate reliably when a written block has actually reached the medium?
- Contact: tech-userlevel
- Mentors: David Young
- Duration estimate: 3 months
Write a program that can read an email, infer quoting conventions, discern bottom-posted emails from top-posted emails, and rewrite the email using the conventions that the reader prefers. Then, take it a step further: write a program that can distill an entire email discussion thread into one document where every writer's contribution is properly attributed and appears in its proper place with respect to interleaved quotations.
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
Design and implement a mechanism that allows fast user-level access to kernel time data structures for NetBSD. For certain types of small data structures the system call overhead is significant. This is especially true for frequently invoked system calls like clock_gettime(2) and gettimeofday(2). With the availability of user-level readable high frequency counters it is possible to create fast implementations for precision time reading. Optimizing clock_gettime(2) and the like will reduce the strain from applications frequently calling these system calls and improve the timing information quality for applications like NTP. The implementation would be based on a to-be-modified version of the timecounters implementation in NetBSD.
Milestones:
- Produce optimizations for clock_gettime
- Produce optimizations for gettimeofday
- Show benchmarks before and after
- start evolving timecounters in NetBSD, demonstrating your improvements
See also the Paper on Timecounters by Poul-Henning Kamp.
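One common approach (used by other systems for their fast userlevel time calls) is a sequence-counter protocol over a page the kernel shares with userland: the kernel bumps a counter to odd while updating, and userland retries if it raced an update. The sketch below is illustrative, with invented names, and omits the memory barriers a real implementation needs:

```c
#include <stdint.h>

/* Hypothetical shared time page.  tk_gen is odd while the kernel is
 * mid-update.  All names here are invented for illustration. */
struct timekeep {
	volatile uint32_t tk_gen;
	uint64_t tk_sec;
	uint64_t tk_nsec;
};

int
read_time(const struct timekeep *tk, uint64_t *sec, uint64_t *nsec)
{
	uint32_t gen;

	do {
		/* Spin while an update is in progress (odd generation). */
		while ((gen = tk->tk_gen) & 1)
			;
		*sec = tk->tk_sec;
		*nsec = tk->tk_nsec;
		/* Retry if the kernel updated under us.  A real version
		 * needs read barriers around these loads. */
	} while (gen != tk->tk_gen);
	return 0;
}
```

On top of this, the stored timecounter scale/offset would be combined with a userlevel-readable cycle counter to interpolate between kernel updates.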
- Contact: tech-userlevel, tech-kern
- Duration estimate: 3 months
The existing puffs protocol gives a way to forward kernel-level file system actions to a userspace process. This project generalizes that protocol to allow forwarding file system actions arbitrarily across a network. This will make it possible to mount any kernel file system type from any location on the network, given a suitable arrangement of components.
The file system components to be used are puffs and rump. puffs is used to forward local file system requests from the kernel to userspace and rump is used to facilitate running the kernel file system in userspace as a service daemon.
The milestones are the following:
Write the necessary code to be able to forward requests from one source to another. This involves most likely reworking a bit of the libpuffs option parsing code and creating a puffs client (say, mount_puffs) to be able to forward requests from one location to another. The puffs protocol should be extended to include the necessary new features or a new protocol invented.
Proof-of-concept code for this has already been written. (Where is it?)
Currently the puffs protocol used for communication between the kernel and userland is machine dependent. To facilitate forwarding the protocol to remote hosts, a machine independent version must be specified.
To be able to handle multiple clients, the file systems must be converted to daemons instead of being utilities. This will also, in the case of kernel file system servers, include adding locking to the communication protocol.
The end result will look something like this:
# start serving ffs from /dev/wd0a on port 12675
onehost> ffs_serv -p 12675 /dev/wd0a
# start serving cd9660 from /dev/cd0a on port 12676
onehost> cd9660_serv -p 12676 /dev/cd0a
# meanwhile in outer space, mount anotherhost from port 12675
anotherhost> mount_puffs -t tcp onehost:12675 /mnt
anotherhost> mount
...
anotherhost:12675 on /mnt type <negotiated>
...
anotherhost> cd /mnt
anotherhost> ls
... etc
The implementor should have some familiarity with file systems and network services.
Any proposal should include answers to at least the following questions:
How is this different from NFS?
How is the protocol different from 9p?
How is this scheme going to handle the hard things that NFS doesn't do very well, such as distributed cache consistency?
Given industry trends, why is this project proposing a new protocol instead of conforming to the SOA model?
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 175h
inetd is a classic method for launching network programs on-the-fly and some of its ideas are coming back into vogue. Enhancing this daemon should include investigations into other similar systems in other operating systems.
Primary milestones:
- Prefork: Support pre-forking multiple children and keeping them alive for multiple invocations.
- Per service configuration file: Add a per-service configuration file similar to xinetd.
- Make the rate-limiting feature configurable on a per-service basis.
- Improve the logging and make logging modes configurable on a per-service basis.
Nice to have:
- Add include directives to the configuration language to allow service definitions to be installed in /usr/share or /usr/pkg/share.
- Add a separate way to turn services on and off, so they can be defined statically (such as in /usr/share) and turned on and off from /etc.
- Allow non-privileged users to add/remove/change their own services using a separate utility.
- Integrate with the new blocklist daemon.
- Configuration compatibility for systemd socket activations
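For the per-service rate-limiting milestone, a token bucket is a natural fit: each service gets a bucket that refills at a configured rate and is drained by incoming connections. This sketch uses invented names and whole-second granularity, purely to show the shape of the mechanism:

```c
#include <stdint.h>

/* Token-bucket limiter; one per service.  Illustrative names only. */
struct ratelim {
	uint32_t rl_tokens;	/* currently available tokens */
	uint32_t rl_burst;	/* bucket capacity */
	uint32_t rl_rate;	/* tokens added per second */
	uint64_t rl_last;	/* time of last refill, in seconds */
};

/* Return 1 to accept the connection, 0 to drop it. */
int
ratelim_allow(struct ratelim *rl, uint64_t now)
{
	uint64_t add = (now - rl->rl_last) * rl->rl_rate;

	rl->rl_last = now;
	rl->rl_tokens = (add + rl->rl_tokens > rl->rl_burst) ?
	    rl->rl_burst : (uint32_t)(rl->rl_tokens + add);
	if (rl->rl_tokens == 0)
		return 0;	/* rate exceeded */
	rl->rl_tokens--;
	return 1;
}
```

The per-service configuration file would then carry the rate and burst values, replacing inetd's current global, hard-wired limit.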
- Contact: tech-embed
- Duration estimate: 175h
Produce lessons and a list of affordable parts and free software that NetBSD hobbyists can use to teach themselves JTAG. Write your lessons for a low-cost embedded system or expansion board that NetBSD already supports.
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
Launchd is a Mac OS X utility used to start and control daemons, similar to init(8) but much more powerful. There was an effort to port launchd to FreeBSD, but it seems to have been abandoned. We should first investigate what happened to the FreeBSD effort to avoid duplicating work. The port is not trivial because launchd uses a lot of Mach features.
Milestones:
- report of FreeBSD efforts (past and present)
- launchd port replacing: init
- launchd port replacing: rc
- launchd port compatible with: rc.d scripts
- launchd port replacing: watchdogd
Nice to have:
- launchd port replacing/integrating: inetd
- launchd port replacing: atd
- launchd port replacing: crond
- launchd port replacing: (the rest)
- Contact: tech-kern
- Mentors: Marc Balmer
- Duration estimate: 3 months
Design and implement a general API for control of LED and LCD type devices on NetBSD. The API would allow devices to register individual LED and LCD devices, along with a set of capabilities for each one. Devices that wish to display status via an LED/LCD would also register themselves as an event provider. A userland program would control the wiring of each event provider to each output indicator. The API would need to encompass all types of LCD displays, such as 3-segment LCDs, 7-segment alphanumerics, and simple on/off state LEDs. A simple example is a keyboard LED, which is an output indicator, and the caps-lock key being pressed, which is an event provider.
There is prior art in OpenBSD; it should be checked for suitability, and any resulting API should not differ from theirs without reason.
Milestones:
- a port of OpenBSD's LED tools
- a userland tool to control LED
- demonstration of functionality
- integration into NetBSD
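To make the shape of such an API concrete, here is a sketch of indicator registration and event delivery. Every name in it is invented for illustration; it deliberately ignores locking, capabilities, and the userland control plane:

```c
#include <string.h>

/* Hypothetical LED/indicator registry.  All names invented. */
#define MAXLEDS 16

struct led_dev {
	const char *ld_name;
	void (*ld_set)(int onoff);	/* turn the indicator on/off */
};

static struct led_dev leds[MAXLEDS];
static int nleds;

/* A driver registers each indicator it exposes. */
int
led_register(const char *name, void (*set)(int))
{
	if (nleds == MAXLEDS)
		return -1;
	leds[nleds].ld_name = name;
	leds[nleds].ld_set = set;
	return nleds++;
}

/* An event provider (e.g. caps-lock press) drives a wired indicator. */
int
led_event(const char *name, int onoff)
{
	for (int i = 0; i < nleds; i++)
		if (strcmp(leds[i].ld_name, name) == 0) {
			leds[i].ld_set(onoff);
			return 0;
		}
	return -1;
}

/* Demo indicator backend, standing in for real hardware. */
static int demo_state;
static void demo_set(int onoff) { demo_state = onoff; }
```

In the real design the wiring between providers and indicators would be configurable from userland rather than keyed by name in the kernel.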
- Contact: tech-net
- Mentors: David Young
- Duration estimate: 3 months
Write a library of routines that estimate the expected transmission duration of a queue of 802.3 or 802.11 frames based on the current bit rate, applicable overhead, and the backoff level. The kernel will use this library to help it decide when packet queues have grown too long in order to mitigate "buffer bloat."
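For 802.3 the core computation is simple: frame bits plus fixed per-frame overhead (preamble/SFD and inter-frame gap) divided by the bit rate; 802.11 then adds rate-dependent preambles, ACK exchanges, and backoff. A minimal sketch of the Ethernet case, with illustrative names:

```c
#include <stdint.h>

/* Fixed per-frame Ethernet overhead, in bytes. */
#define ETHER_PREAMBLE_SFD	8	/* preamble + start frame delimiter */
#define ETHER_IFG		12	/* inter-frame gap */

/* Expected time on the wire for one frame, in nanoseconds.
 * frame_len is the MAC frame length in bytes; bitrate in bits/s.
 * Backoff is deliberately ignored in this sketch. */
uint64_t
ether_tx_duration_ns(uint32_t frame_len, uint64_t bitrate)
{
	uint64_t bits =
	    ((uint64_t)frame_len + ETHER_PREAMBLE_SFD + ETHER_IFG) * 8;

	return bits * 1000000000ULL / bitrate;
}
```

For example, a full-size 1518-byte frame at 100 Mb/s occupies the medium for about 123 microseconds; summing such durations over a queue gives the kernel the queue's drain time, which is the quantity that matters for bufferbloat mitigation.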
- Contact: tech-userlevel
- Mentors: David A. Holland
- Duration estimate: 3 months
The current lpr/lpd system in NetBSD is ancient and doesn't support modern printer systems very well. Interested parties would do a from-scratch rewrite of a new, modular lpr/lpd system that would support both the old lpd protocol and newer, more modern protocols like IPP, and would be able to handle modern printers easily.
This project is not intrinsically difficult, but will involve a rather large chunk of work to complete.
Note that the goal of this exercise is not to reimplement cups -- cups already exists, and one of it is enough.
Some notes:
It seems that a useful way to do this would be to divide the printing system in two: a client-side system, which is user-facing and allows submitting print jobs to arbitrary print servers, and a server-side system, which implements queues and knows how to talk to actual printer devices. In the common case where you don't have a local printer but use printers that are out on the network somewhere, the server-side system wouldn't be needed at all. When you do have a local printer, the client-side system would submit jobs to the local server-side system using the lpr protocol (or IPP or something else) over a local socket but otherwise treat it no differently from any other print server.
The other important thing moving forward: lpr needs to learn about MIME types and accept an argument to tell it the MIME types of its input files. The current family of legacy options lpr accepts for file types are so old as to be almost completely useless; meanwhile the standard scheme of guessing file types inside the print system is just a bad design overall. (MIME types aren't great but they're what we have.)
- Contact: tech-kern, tech-security
- Duration estimate: 3 months
swcrypto could use a variety of enhancements.
Milestones/deliverables:
- use multiple cores efficiently (that already works reasonably well for multiple request streams)
- use faster versions of complex transforms (CBC, counter modes) from our in-tree OpenSSL or elsewhere (e.g. libtomcrypt)
- add support for asymmetric operations (public key)
Extra credit:
- Tie public-key operations into veriexec somehow for extra credit (probably a very good start towards an undergrad thesis project).
- Contact: tech-pkg
- Mentors: Thomas Klausner
- Duration estimate: 350h
Change the infrastructure so that dependency information (currently in buildlink3.mk files) is installed with a binary package and used from there.
This is not an easy project.
pkgsrc currently handles dependencies by including buildlink3.mk files spread over pkgsrc. The problem is that these files describe the current state of pkgsrc, not the current state of the installed packages.
For this reason and because most of the information in the files is templated, the buildlink3.mk files should be replaced by the relevant information in an easily parsable manner (not necessarily make syntax) that can be installed with the package. Here the task is to come up with a definitive list of information necessary to replace all the stuff that's currently done in buildlink3.mk files (including: dependency information, lists of headers to pass through or replace by buildlink magic, conditional dependencies, ...)
The next step is to come up with a proposal how to store this information with installed packages and how to make pkgsrc use it.
Then the coding starts to adapt the pkgsrc infrastructure to do it and show with a number of trivial and non-trivial packages that this proposal works.
It would be good to provide scripts that convert from the current state to the new one, and test it with a bulk build.
Of course it's not expected that all packages be converted to the new framework in the course of this project, but the further steps should be made clear.
goals/milestones:
- invent a replacement for buildlink3.mk files, keeping current features
- demonstrate your new tool as a buildlink3.mk replacement including new features
- execute a bulk build with as many packages as possible using the new buildlink
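As a purely hypothetical illustration of "easily parsable" installed metadata (none of these file names, keys, or values exist in pkgsrc today), the information now templated into buildlink3.mk could be captured in a flat file shipped with each binary package:

```
# +DEPINFO (hypothetical file name and syntax)
depends             jpeg>=9
abi-depends         jpeg>=9e
pass-through-headers jerror.h jpeglib.h
conditional-depends tiff>=4.0:option=tiff
```

The infrastructure would then read this file from the installed package instead of including the buildlink3.mk that matches the current, possibly newer, pkgsrc tree.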
- Contact: tech-pkg
- Mentors: Thomas Klausner
- Duration estimate: 3 months
Put config files (etc/) installed by pkgsrc into some version control system to help keeping track of changes and updating them.
The basic setup might look like this:
- There is a repository containing the config files installed by pkgsrc, starting out empty.
- During package installation, pkgsrc imports the package's config files into the repository onto a branch tagged with the name and version of the package (if available, on a vendor branch). (e.g.: digest-20080510)
After installation, there are two cases:
- the package was not installed before: the package's config files get installed into the live configuration directory and committed to the head of the config repository
- the package was installed before: a configuration update tool should display changes between the new and the previous original version as well as changes between the previous original and installed config file, for each config file the package uses, and support merging the changes made necessary by the package update into the installed config file. Commit the changes to head when the merge is done.
Regular automated check-ins of the entire live pkgsrc configuration should be easy to set up, as should manual check-ins of individual files, so that the local admin can use meaningful commit messages when they change their config, even if they are not experienced users of version control systems.
The actual commands to the version control system should be hidden behind an abstraction layer, and the vcs operations should be kept simple, so that other compatibility layers can be written, and eventually the user can pick their vcs of choice (and also a vcs location of choice, in case e.g. the enterprise configuration repository is on a central subversion server).
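The abstraction layer described above could start out as a simple table mapping generic operations to backend-specific command lines. A hedged C sketch follows; the backend set, format strings, and function name are illustrative, not an existing pkgsrc interface:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative VCS abstraction: callers use generic operations, and each
 * backend supplies the concrete command line to run. */
typedef struct {
    const char *name;
    const char *commit_fmt;  /* printf format, args: message, file */
    const char *diff_fmt;    /* printf format, args: file */
} vcs_backend;

static const vcs_backend backends[] = {
    { "cvs", "cvs commit -m '%s' %s",   "cvs diff -u %s" },
    { "git", "git commit -m '%s' -- %s", "git diff -- %s" },
};

/* Build the commit command for the named backend into buf.
 * Returns 0 on success, -1 if the backend is unknown. */
int vcs_commit(const char *backend, const char *file, const char *msg,
               char *buf, size_t buflen)
{
    for (size_t i = 0; i < sizeof backends / sizeof backends[0]; i++) {
        if (strcmp(backends[i].name, backend) == 0) {
            snprintf(buf, buflen, backends[i].commit_fmt, msg, file);
            return 0;
        }
    }
    return -1;
}
```

Adding a compatibility layer for another VCS then amounts to adding one table entry, which is the property the proposal asks for.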
milestones/goals:
- choose a VCS system (BSD licensed is a nice-to-have)
- write wrappers around it, or embed its functionality
- demonstrate usage in upgrades
bonus:
- extend functionality into additional VCS systems
This project was done during Google Summer of Code 2018 by Keivan Motavalli (Configuration files versioning in pkgsrc). At the moment the code needs to be reviewed and imported into pkgsrc.
- Contact: tech-userlevel
- Duration estimate: 3 months
Using kernel virtualization offered by rump it is possible to start a virtually unlimited amount of TCP/IP stacks on one host and interlink the networking stacks in an arbitrary fashion. The goal of this project is to create a visual GUI tool for creating and managing the networks of rump kernel instances. The main goal is to support testing and development activities.
The implementation should be split into a GUI frontend and a command-line network-control backend. The former can be used to design and edit the network setup, while the latter can be used to start and monitor the virtual network from e.g. automated tests.
The GUI frontend can be implemented in any language that is convenient. The backend should be implemented in a language and fashion which makes it acceptable to import into the NetBSD base system.
The suggested milestones for the project are:
- get familiar and comfortable with starting and stopping rump kernels and configuring virtual network topologies
- come up with an initial backend command language. It can be something as simple as bourne shell functions.
- come up with an initial specification and implementation of the GUI tool.
- make the GUI tool be able to save and load network specifications.
- create a number of automated tests to evaluate the usefulness of the tool in test creation.
In case of a capable student, the project goals can be set on a more demanding level. Examples of additional features include:
- defining network characteristics such as jitter and packet loss for links
- real-time monitoring of network parameters during a test run
- UI gimmicks such as zooming and node grouping
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
All architectures suffer from code injection issues because the only writable segment is the PLT/GOT. RELRO (RELocation Read Only) is a mitigation technique used during dynamic linking to prevent access to the PLT/GOT. There is partial RELRO, which protects the GOT but leaves the PLT writable, and full RELRO, which protects both at the expense of performing full symbol resolution at startup time. The project is about making the necessary modifications to the dynamic loader (ld.elf_so) to make RELRO work.
If that is completed, then we can also add the following improvement: Currently kernels with options PAX_MPROTECT can not execute dynamically linked binaries on most RISC architectures, because the PLT format defined by the ABI of these architectures uses self-modifying code. New binutils versions have introduced a different PLT format (enabled with --secureplt) for alpha and powerpc.
Milestones:
- For all architectures we can improve security by implementing relro2.
- Once this is done, we can improve security for the RISC architectures by adding support for the new PLT formats introduced in binutils 2.17 and gcc 4.1. This will require changes to the dynamic loader (ld.elf_so), various assembly headers, and library files.
- Support for both the old and new formats in the same invocation will be required.
Status:
- Added support to the dynamic loader (ld.elf_so) to handle protecting the GNU relro section.
- Enabled partial RELRO by default on x86.
- Contact: tech-pkg
- Mentors: Aleksej Saushev
- Duration estimate: 350h
pkgsrc is a very flexible package management system. It provides a comprehensible framework to build, test, deploy, and maintain software in its original form (with porter/packager modifications where applicable) as well as with site local modifications and customizations. All this makes pkgsrc suitable to use in diverse environments ranging from small companies up to large enterprises.
While pkgsrc already contains most elements needed to build an authentication server (or an authentication server failover pair), considerable knowledge about the necessary elements is needed to install one, and the correct configuration, while in most cases pretty much identical, is tedious to produce and not without pitfalls.
The goal of this project is to create a meta-package that will deploy and pre-configure an authentication server suitable for a single sign-on infrastructure.
Necessary tasks: provide missing packages, provide packages for initial configuration, package or create corresponding tools to manage user accounts, document.
The following topics should be covered:
- PAM integration with OpenLDAP and DBMS;
- Samba with PAM, DBMS and directory integration;
- Kerberos setup;
- OpenLDAP replication;
- DBMS (PostgreSQL is a must, MySQL optional, if time permits), replication (master-master, if possible);
- DNS server with a sane basic dynamic DNS update config using directory and database backend;
- user account management tools (web interface, command line interface, see user(8) manual page, perhaps some scripting interface);
- configuration examples for integration of services (web services, mail, instant messaging, PAM is a must, DBMS and directory optional).
All covered services should be documented, in particular documentation should include:
- initial deployment instructions;
- sample configuration for reasonably simple case;
- instructions how to test if the software works;
- references to full documentation and troubleshooting instructions (if any), both should be provided on-site (i.e. it should be possible to have everything available given pkgsrc snapshot and corresponding distfiles and/or packages on physical media).
In a nutshell, the goal of the project is to make it possible for a qualified system administrator to deploy a basic service (without accounts) with a single pkg_add.
- Contact: tech-install
- Mentors: Marc Balmer, Martin Husemann
- Duration estimate: 350h
The goal of this project is to provide an alternative version of the NetBSD system installer with a simple, X based graphical user interface.
The installer currently uses a "homegrown" (similar to CUA) text based interface, thus being easy to use over serial console as well as on a framebuffer console.
The current installer code is partly written in plain C, but in big parts uses C fragments embedded into its own definition language, preprocessed by the "menuc" tool into plain C code and linked against libterminfo.
During this project, the "menuc" tool is modified to optionally generate a different version of the C code, which then is linked against standard X libraries. The C stub fragments sprinkled throughout the menu definitions need to be modified to be reusable for both (text and X) versions. Where needed the fragments can just call C functions, which have different implementations (selected via a new ifdef).
Since the end result should be able to run off an enhanced install CD, the selection of widgets used for the GUI is limited. Only base X libraries are available. A look & feel similar to current xdm would be a good start.
Development can be done on an existing system; testing does not require actual installation on real hardware.
An optional extension of the project is to modify the creation of one or more port's install CD to make use of the new xsysinst.
Milestones include:
- modify the "menuc" tool to support X
- keep text/serial console installing
- demonstrate a GUI install
- demonstrate fallback to the text installer
The candidate must have:
- familiarity with the system installer. You should have used sysinst to install the system.
- familiarity with C and X programming.
The following would also be useful:
- familiarity with NetBSD.
- familiarity with user interface programming using in-tree X widgets.
References:
- sysinst source (opengrok)
- vnconfig(8) manual page
- Contact: tech-kern
Port valgrind to NetBSD for pkgsrc, then use it to do an audit of any memory leakage.
See also http://valgrind.org and http://vg4nbsd.berlios.de for work in progress.
- Contact: tech-toolchain
- Mentors: Jörg Sonnenberger
- Duration estimate: 350h
NetBSD supports a number of platforms where both 32-bit and 64-bit execution is possible. The best-known example is the i386/AMD64 pair; the other important one is SPARC/SPARC64. On these platforms it is highly desirable to be able to run all 32-bit applications with a 64-bit kernel. This is the purpose of the netbsd32 compatibility layer.
At the moment, the netbsd32 layer consists of a number of system call stubs and structure definitions written and maintained by hand. It is hard to ensure that the stubs and definitions are up to date and correct. One complication is the difference in alignment rules. On i386, uint64_t has 32-bit alignment, but on AMD64 it uses natural (64-bit) alignment. This and the resulting padding introduced by the compiler can create hard-to-find bugs.
goals/milestones:
- replace the manual labour with an automatic tool
This tool should allow both verification and generation of structure definitions for use in netbsd32 code, as well as generation of system call stubs and conversion functions. Generated stubs should also ensure that no kernel stack data is leaked in hidden padding without having to resort to unnecessarily large memset calls.
For this purpose, the Clang C parser or the libclang frontend can be used to analyse the C code.
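The alignment pitfall can be seen in a small C example (struct names are illustrative). On amd64 the compiler inserts hidden padding after a 32-bit field that precedes a uint64_t, padding that is absent under i386 rules; the packed/aligned attributes below emulate the i386 layout on a 64-bit compiler for comparison:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: a syscall argument struct as a 64-bit kernel sees it.
 * On amd64, uint64_t has natural 8-byte alignment, so 4 bytes of hidden
 * padding follow `flags`; copying this struct to userland without clearing
 * that padding would leak kernel stack data. */
struct args64 {
    uint32_t flags;
    /* 4 bytes of compiler-inserted padding here on amd64 */
    uint64_t offset;
};

/* The same struct under i386 rules, where uint64_t is 4-byte aligned,
 * so there is no padding and the struct is 4 bytes smaller. */
struct args32 {
    uint32_t flags;
    uint64_t offset;
} __attribute__((packed, aligned(4)));
```

A generated conversion stub has to know about exactly this kind of layout difference, which is why deriving it automatically from the C definitions is attractive.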
This page contains the list of all available projects, broken down by topic and difficulty. The topics are as follows:
File system / storage projects
Easy
Medium
- Buffer queue coalescing and splitting
- Semantics of ..
- Defragmentation for FFS
- Discard for FFS
- Flash translation layer
- Apple ISO9660 extensions
- Rewrite kernfs and procfs
- Add directory notify to kqueue
- Quotas for LFS
- Make MAXPHYS dynamic (underway; stalled)
- Kernel plugins for FS policy logic (research)
- Discard for RAIDframe
- RAID 6 in RAIDframe (175h)
- RAIDframe scrubbing (175h)
- Per-user memory limits for tmpfs
- Add snapshots to tmpfs
- Transparent full-disk encryption
Hard
Networking projects
Easy
Medium
Hard
Port-related projects
Easy
Medium
Hard
Other kernel-level projects
Easy
Medium
- Convert a Wi-Fi driver to the new Wi-Fi stack (175h)
- ALTQ Refactoring and NPF Integration (350h)
- Binary compatibility for puffs backend
- Compressed Cache System
- DTrace syscall provider
- Test root device and root file system selection (350h)
- LED/LCD Generic API
- Packet Latency Library
- Locking pages into memory, redux
- OpenCrypto swcrypto(4) enhancements
- RFC 5927 countermeasures against IPv6 ICMP attacks on TCP
- Add a kernel API for timed power-on
- auto create swap on memory pressure (175h)
- Merge code from two Realtek Wifi Drivers (175h)
- Userland PCI drivers (350h)
- Porting Raspberry Pi graphics -- VC4 DRM driver (350h)
- VMWare graphical acceleration (350h)
- Dom0 SMP support
- Execute in place support
Hard
- Real asynchronous I/O (350h)
- Lockless, atomic FIFO/LIFO queues
- Lockless, atomic and generic Radix/Patricia trees
- Support Broadcom SoftMAC WiFi adapters
- Emulating android programs
- Coordinated caching and scheduling
- support jails-like features
- Kernel continuations
- Language-neutral interface specifications (research)
- Tickless NetBSD with high-resolution timers (350h)
- Port dom0 to the ARM cpu architecture
- Blktap2 driver support
- Boot path cleanup to remove #ifdef XEN clutter
- pv-on-hvm - Paravirtualised Driver Support with drivers in an HVM container
- ACPI power management (sleep/wakeup) support for Xen
- pvfb framebuffer video driver support (frontend)
- Xen DRMKMS support (GUI support on dom0)
- RAM hot-add
- Xen: direct map support (with large pages)
- pvscsi driver support (frontend/backend)
- pvusb driver support (frontend/backend)
- libvirt support for Xen
- xhci scatter-gather support
Userland projects
Easy
Medium
- Add UEFI boot options
- Audio visualizer for the NetBSD base system (350h)
- Suffix and pattern rules in BSD make
- Light weight precision user level time reading (350h)
- Query optimizer for find(1) (350h)
- System-level font handling in Unix
- gomoku(6)'s brain
- IKEv2 daemon for NetBSD (350h)
- Port launchd (350h)
- New LPR/LPD for NetBSD
- Add support for OpenCL and Vulkan to NetBSD xsrc (175h)
- Automatic tests for PAM
- Efficient Package Distribution
- Visualization tool for arbitrary network topology
- SASL-C implementation for the OpenLDAP client (350h)
- Secure-PLT - supporting RELRO binaries (350h)
- Research and integrate the static code analyzers with the NetBSD codebase (350h)
- Sysinst alternative interface (350h)
Hard
Desktop projects
Easy
Medium
Hard
Code Quality Improvement projects
Easy
Medium
Hard
pkgsrc projects
Easy
- Creating .deb packages
- Improve libvirt support in NetBSD pkgsrc
- Improve UI of pkgsrc MESSAGE (175h)
- Version control config files
- Isolate builds from user environment
- Port Mancoosi to pkgsrc
- Spawn support in pkgsrc tools
- Further isolate builds from system environment
- Authentication server meta-package (350h)
Medium
- Make signed binary pkgs for NetBSD happen
- Bulk build tracker application
- Support pkgsrc cross-bootstrapping
- Create a cross-compile environment package for pkgsrc on NetBSD
- Bulk builds with download isolation
- Port the Enlightenment desktop environment to NetBSD (350h)
- pkgin improvements (350h)
- Porting Chromium web browser to NetBSD
- Split debug symbols for pkgsrc builds similar to redhat
- multipkg pkgsrc
- Web interface for pbulk
- Improve support for NetBSD sensors and audio APIs in third-party software (350h)
- Undo support for pkgsrc
Hard
Miscellaneous projects
Easy
Medium
Hard
Unclassified projects
This section contains the list of projects that have not been classified: i.e. projects that lack a tag defining their category and/or their difficulty.
Theoretically, this section should be empty. In practice, however, it is all too easy to forget to tag a project appropriately when defining it, and this section is intended to help in spotting such misclassified projects. Please note that misclassified projects may not appear in other indexes, so it is important to spot them!
Projects without a category definition
Projects without a difficulty definition
Notes
This page is (mostly) generated automatically from the project pages themselves. To add a new project, just add a new project page and tag it with the proper tags. Also note that the last modification date appearing below is the last modification date of the list template, not of any of the projects themselves; nor is it the date when the last new project was added.
- Contact: tech-install
- Mentors: Martin Husemann
- Duration estimate: 3 months
IMPORTANT: This project was completed by Eugene Lozovoy. You may still contact the people above for details, but please do not submit an application for this project.
The goal of this project is to enhance the NetBSD system installer (sysinst) to provide additional support for (in order):
- partition disks using GPT
- prepare multiple disks
- combine multiple partitions to raid/lvm volumes
- encrypt partitions via cgd
- other enhancements
The installer currently supports installing the system to any available single disk. It is possible to select which parts (distribution sets) of the system to install, and also to customise the disk partition layout. Sysinst can also use vnode pseudo disks, so can be tested without the need to re-install the host system.
The first goal is to allow partitioning disks using the GUID partition table (GPT). The current partitioning code is tied heavily to the BSD disklabel. Fixing it is straightforward, but both methods have to be offered, and only some architectures can boot from GPT disks.
The second goal is to allow preparing several disks. This part would also be useful (see "other enhancements" below) as a stand-alone tool. Various disks may be partitioned using different schemes, for example when the boot disk can not use GPT but secondary (large) disks should use it. This part is also a direct prerequisite for the following one.
The third goal is to (optionally) create logical volumes from multiple partitions or disks, either using raidframe or LVM. This includes making the volumes bootable, if possible.
The fourth goal is to add support for creating and installing on to cgd (encrypted) partitions. The initial support will not be for the boot partition, but other partitions should be supported.
The other enhancements that might be possible are (not in priority order):
user interface
- customise colours
- add "back" and "forward" menu options
- run parts of the installer independently (e.g. disk partitioning, set installation)
cgd enhancements
- add the ability to encrypt the whole disk and to enter the decryption key at boot time
automated test setup
- add the ability to install Anita for automated testing
The candidate must have:
- familiarity with the system installer. You should have used sysinst to install the system.
- familiarity with C programming. The system installer program consists of C code.
- a test system, preferably with a 2nd bootable device.
The following would also be useful:
- familiarity with NetBSD.
- familiarity with user interface programming using curses.
References:
- sysinst source (opengrok)
- vnconfig(8) manual page
- raidctl(8) manual page
- cgdconfig(8) manual page
- LVM on NetBSD
- Anita automated testing
- Contact: tech-pkg
- Mentors: Thomas Klausner
- Duration estimate: 3 months
IMPORTANT: This project was completed by various NetBSD developers. You may still contact the people above for details, but please do not submit an application for this project.
sysinst, the NetBSD installation tool, should be able to get NetBSD set up with most packages relevant to the end user during the installation step, to make it possible to get a usable system during the initial setup. The packages might be simple packages like screen or bigger ones like firefox. Configuration of the packages is not required to happen in sysinst.
A short overview of the milestones involved:
Improve sysinst so it can list a pkgsummary.gz file on the install media and offer to install a subset of them. There should be a chooser included in sysinst where the user can select the packages they want to install.
There should be some pre-defined sets included with sysinst that define the necessary packages for a Gnome or KDE desktop, so the user just chooses them (possibly just by adding appropriate meta packages to pkgsrc).
Support fetching a pkgsummary file from a remote host (usually ftp.NetBSD.org with useful default PATH for architecture/release, but overridable) and offer installations of packages from there as well.
For bonus points (as a last step, once the rest works):
- Come up with lists of packages for amd64 to fill a CD or DVD (including the NetBSD base system)
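The pkg_summary(5) data sysinst would read is a stream of KEY=VALUE lines, with blank lines separating packages. A minimal scanner might look like the following sketch (the function name and fixed-size buffers are illustrative, not sysinst code):

```c
#include <string.h>

/* Scan a pkg_summary(5) stream and collect PKGNAME values, one per
 * record; records are blocks of KEY=VALUE lines separated by blank
 * lines.  A real chooser in sysinst would build a selectable list
 * from these names.  Returns the number of names found. */
int collect_pkgnames(const char *summary, char names[][64], int max)
{
    int n = 0;
    const char *p = summary;

    while (*p && n < max) {
        if (strncmp(p, "PKGNAME=", 8) == 0) {
            const char *e = strchr(p, '\n');
            size_t len = e ? (size_t)(e - (p + 8)) : strlen(p + 8);
            if (len >= 64)
                len = 63;            /* truncate overly long names */
            memcpy(names[n], p + 8, len);
            names[n][len] = '\0';
            n++;
        }
        const char *nl = strchr(p, '\n');
        if (nl == NULL)
            break;
        p = nl + 1;                  /* advance to the next line */
    }
    return n;
}
```

Reading the same format from install media and from a remote pkg_summary.gz is what makes the local and network milestones above share most of their code.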
- Contact: tech-net
- Mentors: Alistair G. Crooks
- Duration estimate: 3 months
IMPORTANT: This project was completed by Vlad Balan. You may still contact the people above for details, but please do not submit an application for this project.
When using connect(2) to connect the client end of a socket, the system will choose a source port number for you. Having an easily guessed port number can allow various attacks to take place. Choosing the port number at random, whilst not perfect, gives more protection against these attacks. RFC 6056 gives an excellent overview of the algorithms in use for "randomising source ports", giving examples from FreeBSD, OpenBSD and Linux.
This project has a number of goals:
- Evaluate and prioritise the algorithms in RFC 6056.
- Implement the algorithms in RFC 6056, and make it possible to choose between them with sysctl.
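One of the RFC 6056 schemes, Algorithm 3 (simple hash-based port selection), can be sketched as follows. This is illustrative only: FNV-1a stands in for the keyed hash F() of the RFC (a kernel would use a proper keyed hash with a boot-time secret), and the caller is expected to retry with an incremented counter if the candidate port is already in use:

```c
#include <stdint.h>
#include <string.h>

#define EPHEMERAL_MIN 49152u
#define EPHEMERAL_MAX 65535u

/* FNV-1a, standing in for the keyed hash F() of RFC 6056. */
static uint32_t fnv1a(const uint8_t *data, size_t len, uint32_t seed)
{
    uint32_t h = 2166136261u ^ seed;
    while (len--) {
        h ^= *data++;
        h *= 16777619u;
    }
    return h;
}

/* Sketch of RFC 6056 Algorithm 3: the starting offset into the ephemeral
 * range is derived from the connection identifiers plus a secret, so
 * unrelated destinations see unrelated port sequences, while repeated
 * connections to one destination walk through increasing ports. */
uint16_t select_ephemeral_port(uint32_t laddr, uint32_t raddr,
                               uint16_t rport, uint32_t secret,
                               uint32_t counter)
{
    uint8_t key[10];
    memcpy(key, &laddr, 4);
    memcpy(key + 4, &raddr, 4);
    memcpy(key + 8, &rport, 2);

    uint32_t range = EPHEMERAL_MAX - EPHEMERAL_MIN + 1;
    uint32_t offset = fnv1a(key, sizeof key, secret);
    return (uint16_t)(EPHEMERAL_MIN + (offset + counter) % range);
}
```

Exposing the choice of algorithm via sysctl, as the milestones ask, would mean keeping several such selection functions behind one dispatch point.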
- Contact: tech-userlevel
- Mentors: Antti Kantee
- Duration estimate: 3 months
IMPORTANT: This project was completed by Vyacheslav Matyushin. You may still contact the people above for details, but please do not submit an application for this project.
As is well-known, puffs(3) is the NetBSD userspace file system framework. It provides support for implementing file servers in userspace. A lesser known "cousin" of puffs is the Pass-to-Userspace Device, or pud(4) framework, which provides support for implementing character and block device servers in userspace. Both use putter(9) for transmitting requests to and from the kernel.
Currently, puffs includes a userspace support library: libpuffs. It provides two facets:
- file system routines and callback interface
- generic parts, including kernel request handling
On the other hand, pud is without a userspace support library, and servers talk with the kernel directly using read() and write().
The goal of the project is to modify libpuffs into a generic library which pud and puffs can share, and provide libpuffs and libpud built on this base. The submission should include a rough analysis of the source modules of libpuffs and what is going to happen to them during the project.
This project is fairly straightforward, but involves a reasonable amount of work. Plenty of documentation exists to make the learning curve manageable. This project is an excellent opportunity to practise getting your hands dirty with real world systems.
- Contact: tech-pkg
- Mentors: Jeremy C. Reed, Aleksej Saushev
- Duration estimate: 3 months
IMPORTANT: This project was completed by Anton Panev. You may still contact the people above for details, but please do not submit an application for this project.
In 2006 and 2007, the pkgsrc build system was abstracted to allow packaging for other package system packages. For details see pkgsrc/mk/flavor/README and the original commit message.
This means pkgsrc build infrastructure may be used to potentially create packages that may be installed using non-NetBSD packaging tools (i.e. not using NetBSD's pkg_add). Note: this is not about cross-builds; the packages may be built on an appropriate host system using the pkgsrc collection.
This project may include creating shell command wrappers to mimic pkgsrc build needs as documented in README. (The wrappers are only needed for building packages, not for using them.) The project may also include implementing package-specific make targets as documented in README; see also the suggestions in the initial commit message.
The goals of this project include:
- Add support for RPM, dpkg, SVR4, PC-BSD PBI, and/or the Solaris native package system(s).
- Be able to build at least 100 packages and demonstrate that the packages can be installed and de-installed using the corresponding system's native package tools.
- Document support and interaction with existing packages.
- Contact: tech-pkg
- Mentors: Thomas Klausner
- Duration estimate: 3 months
IMPORTANT: This project was completed by Johnny C. Lam. You may still contact the people above for details, but please do not submit an application for this project.
Instead of including install scripts from the infrastructure into every binary package, just include the necessary information and split the scripts off into a separate package that is installed first (right after bootstrap, as soon as the first package needs it). This affects user creation, installation of tex packages, ...
Milestones:
- identify example packages installing users, groups, and documentation
- demonstrate pkgsrc packages which add users, etc
- Also add support for actions that happen once after a big upgrade session, instead of once per package (e.g. ls-lR rebuild for tex).
- convert some existing packages to use this new framework
- allow options framework to configure these resources per-package
An intermediate step would be to replace various remaining INSTALL scripts by declarative statements and install script snippets using them.
This was implemented via the new pkgtasks framework, which can be enabled by setting in mk.conf:
_USE_NEW_PKGINSTALL= yes
- Contact: tech-pkg
- Mentors: Thomas Klausner
- Duration estimate: 3 months
Put config files (etc/) installed by pkgsrc into some version control system to help keeping track of changes and updating them.
The basic setup might look like this:
- There is a repository containing the config files installed by pkgsrc, starting out empty.
- During package installation, pkgsrc imports the package's config files into the repository onto a branch tagged with the name and version of the package (if available, on a vendor branch). (e.g.: digest-20080510)
After installation, there are two cases:
- the package was not installed before: the package's config files get installed into the live configuration directory and committed to the head of the config repository
- the package was installed before: a configuration update tool should display changes between the new and the previous original version as well as changes between the previous original and installed config file, for each config file the package uses, and support merging the changes made necessary by the package update into the installed config file. Commit the changes to head when the merge is done.
Regular automated check-ins of the entire live pkgsrc configuration should be easy to set up, but manual check-ins of individual files should also be supported, so the local admin can use meaningful commit messages when changing their config, even if they are not experienced users of version control systems.
The actual commands to the version control system should be hidden behind an abstraction layer, and the vcs operations should be kept simple, so that other compatibility layers can be written, and eventually the user can pick their vcs of choice (and also a vcs location of choice, in case e.g. the enterprise configuration repository is on a central subversion server).
milestones/goals:
- choose a VCS system (BSD licensed is a nice-to-have)
- write wrappers around it, or embed its functionality
- demonstrate usage in upgrades
bonus:
- extend functionality into additional VCS systems
This project was done during Google Summer of Code 2018 by Keivan Motavalli (Configuration files versioning in pkgsrc project). The code now needs to be reviewed and imported into pkgsrc.
- Contact: tech-pkg
- Mentors: Thomas Klausner
- Duration estimate: 350h
Change the infrastructure so that dependency information (currently in buildlink3.mk files) is installed with a binary package and is used from there.
This is not an easy project.
pkgsrc currently handles dependencies by including buildlink3.mk files spread over pkgsrc. The problem is that these files describe the current state of pkgsrc, not the current state of the installed packages.
For this reason and because most of the information in the files is templated, the buildlink3.mk files should be replaced by the relevant information in an easily parsable manner (not necessarily make syntax) that can be installed with the package. Here the task is to come up with a definitive list of information necessary to replace all the stuff that's currently done in buildlink3.mk files (including: dependency information, lists of headers to pass through or replace by buildlink magic, conditional dependencies, ...)
The next step is to come up with a proposal how to store this information with installed packages and how to make pkgsrc use it.
Then the coding starts to adapt the pkgsrc infrastructure to do it and show with a number of trivial and non-trivial packages that this proposal works.
It would be good to provide scripts that convert from the current state to the new one, and test it with a bulk build.
Of course it's not expected that all packages be converted to the new framework in the course of this project, but the further steps should be made clear.
goals/milestones:
- invent a replacement for buildlink3.mk files, keeping current features
- demonstrate your new tool as a buildlink3.mk replacement including new features
- execute a bulk build with as many packages as possible using the new buildlink
- Contact: tech-kern, tech-security
- Duration estimate: 3 months
swcrypto could use a variety of enhancements.
Milestones/deliverables:
- use multiple cores efficiently (that already works reasonably well for multiple request streams)
- use faster versions of complex transforms (CBC, counter modes) from our in-tree OpenSSL or elsewhere (eg libtomcrypt)
- add support for asymmetric operations (public key)
Extra credit:
- Tie public-key operations into veriexec somehow for extra credit (probably a very good start towards an undergrad thesis project).
- Contact: tech-userlevel
- Mentors: David A. Holland
- Duration estimate: 3 months
The current lpr/lpd system in NetBSD is ancient and doesn't support modern printer systems very well. Interested parties would do a from-scratch rewrite of a new, modular lpr/lpd system that would support both the old lpd protocol and newer, more modern protocols like IPP, and would be able to handle modern printers easily.
This project is not intrinsically difficult, but will involve a rather large chunk of work to complete.
Note that the goal of this exercise is not to reimplement cups -- cups already exists, and one of those is enough.
Some notes:
It seems that a useful way to do this would be to divide the printing system in two: a client-side system, which is user-facing and allows submitting print jobs to arbitrary print servers, and a server-side system, which implements queues and knows how to talk to actual printer devices. In the common case where you don't have a local printer but use printers that are out on the network somewhere, the server-side system wouldn't be needed at all. When you do have a local printer, the client-side system would submit jobs to the local server-side system using the lpr protocol (or IPP or something else) over a local socket but otherwise treat it no differently from any other print server.
The other important thing moving forward: lpr needs to learn about MIME types and accept an argument to tell it the MIME types of its input files. The current family of legacy options lpr accepts for file types are so old as to be almost completely useless; meanwhile the standard scheme of guessing file types inside the print system is just a bad design overall. (MIME types aren't great but they're what we have.)
- Contact: tech-userlevel
- Mentors: Brett Lymn
- Duration estimate: 3 months
IMPORTANT: This project was completed by Brett Lymn. You may still contact the people above for details, but please do not submit an application for this project.
Updating an operating system image can be fraught with danger: an error could leave the system unbootable and require significant work to restore it to operation. The aim of this project is to permit a system to be updated while it is running, requiring only a reboot to activate the updated system, and to provide the facility to roll back to a "known good" system in the event that the update causes regressions.
Milestones for this project:
- Make a copy of the currently running system
- Either apply patches, install binary sets, or run a source build with the copy as the install target
- Update critical system files to reference the copy (things like fstab)
- Update the boot menu to make the copy the default boot target, the current running system should be left as an option to boot to
The following link shows how live upgrade works on Solaris: http://docs.oracle.com/cd/E19455-01/806-7933/overview-3/index.html The aim is to implement something that is functionally similar to this which can not only be used for upgrading but for any risky operation where a reliable back out is required.
- Contact: tech-net
- Mentors: David Young
- Duration estimate: 3 months
Write a library of routines that estimate the expected transmission duration of a queue of 802.3 or 802.11 frames based on the current bit rate, applicable overhead, and the backoff level. The kernel will use this library to help it decide when packet queues have grown too long in order to mitigate "buffer bloat."
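The core arithmetic of such an estimator is simple for the 802.3 case. A minimal sketch (the function name and fixed-overhead constant are illustrative, not an existing NetBSD interface; a real library would also model 802.11 rates, backoff and retries):

```c
#include <stdint.h>

/*
 * Illustrative only: estimate the wire time, in microseconds, of one
 * 802.3 frame.  Fixed overhead is preamble+SFD (8), MAC header (14),
 * FCS (4) and inter-frame gap (12) = 38 octets.
 */
static uint64_t
ether_tx_usec(uint32_t payload_len, uint64_t bitrate_bps)
{
	const uint32_t overhead = 38;
	uint64_t bits = (uint64_t)(payload_len + overhead) * 8;

	return bits * 1000000 / bitrate_bps;
}
```

For example, a 1462-byte payload (1500 octets on the wire) takes 120 µs at 100 Mbit/s and 1200 µs at 10 Mbit/s; summing this over a queue gives the expected drain time.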
- Contact: tech-userlevel
- Mentors: Jörg Sonnenberger
- Duration estimate: 3 months
IMPORTANT: This project was completed by William Morr. You may still contact the people above for details, but please do not submit an application for this project.
NetBSD provides a BSD licensed implementation of libintl. This implementation is based on the specifications from GNU gettext. It has not kept up with the development of gettext in the last few years though, lacking e.g. support for context specific translations. NetBSD currently also has to depend on GNU gettext to recreate the message catalogs.
Milestones for this project include:
- Restore full API compatibility with current gettext. At the time of writing, this is gettext 0.18.1.1.
- Implement support for the extensions in the message catalog format. libintl should be able to process all .mo files from current gettext and return the same results via the API.
- Provide a clean implementation of msgfmt.
- Other components of gettext like msgmerge and the gettext frontend should be evaluated case-by-case if they are useful for the base system and whether third party software in pkgsrc depends on them.
- demonstrate the elimination of GNU gettext dependencies
- Contact: tech-userlevel
- Mentors: Brett Lymn
- Duration estimate: 3 months
IMPORTANT: This project was completed by Naman Jain. You may still contact the people above for details, but please do not submit an application for this project.
Update: This project was completed by Naman Jain during Google Summer of Code 2020 and merged into NetBSD. More tests are still needed for completion's sake.
The curses library is an important part of the NetBSD operating system, many applications rely on the correct functioning of the library. Performing modifications on the curses library can be difficult because the effects of the change may be subtle and can introduce bugs that are not detected for a long time.
The testing framework has been written to run under the atf framework but has not been committed to the tree yet.
The student undertaking this project will be provided with the testing framework and will use this to generate test cases for curses library calls. Most of the work will require analytical skills to verify the output of the test is actually correct before encapsulating that output into a validation file.
Milestones for this project:
- produce a suite of high quality tests for the curses library
- These tests should exercise every aspect of the library functionality.
This project will need a good understanding of the curses library and will provide the student with a much deeper understanding of the operation of curses.
- Contact: tech-kern
- Mentors: Marc Balmer
- Duration estimate: 3 months
Design and implement a general API for control of LED and LCD type devices on NetBSD. The API would allow devices to register individual LED and LCD devices, along with a set of capabilities for each one. Devices that wish to display status via an LED/LCD would also register themselves as an event provider. A userland program would control the wiring of each event provider, to each output indicator. The API would need to encompass all types of LCD displays, such as 3 segment LCDs, 7 segment alphanumerics, and simple on/off state LED's. A simple example is a keyboard LED, which is an output indicator, and the caps-lock key being pressed, which is an event provider.
There is prior art in OpenBSD; it should be checked for suitability, and any resulting API should not differ from theirs without reason.
Milestones:
- a port of OpenBSD's LED tools
- a userland tool to control LED
- demonstration of functionality
- integration into NetBSD
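To make the shape of such an API concrete, here is a purely hypothetical sketch of the device-registration side; no such interface exists in NetBSD today and every name below is invented for illustration:

```c
/*
 * Hypothetical sketch of the proposed framework's device side.
 * All names are invented for illustration.
 */
enum led_kind { LED_ONOFF, LED_SEG7, LCD_SEG3 };

struct led_dev {
	const char	*ld_name;	/* e.g. "kbd0:capslock" */
	enum led_kind	 ld_kind;
	int		(*ld_set)(void *cookie, int value);
	void		*ld_cookie;
};

/* demo backend: "sets" the LED by storing the value in an int. */
static int
demo_set(void *cookie, int value)
{
	*(int *)cookie = value;
	return 0;
}

/*
 * The userland wiring of an event provider (e.g. caps-lock pressed)
 * to an output indicator would ultimately end in a call like this.
 */
static int
led_fire(struct led_dev *ld, int value)
{
	return ld->ld_set(ld->ld_cookie, value);
}
```

The capability field (`ld_kind`) is what lets a generic userland tool decide how to map an event onto a 3-segment LCD versus a simple on/off LED.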
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
Launchd is a Mac OS X utility that is used to start and control daemons, similar to init(8) but much more powerful. There was an effort to port launchd to FreeBSD, but it seems to be abandoned. We should first investigate what happened to the FreeBSD effort to avoid duplicating work. The port is not trivial because launchd uses a lot of Mach features.
Milestones:
- report of FreeBSD efforts (past and present)
- launchd port replacing: init
- launchd port replacing: rc
- launchd port compatible with: rc.d scripts
- launchd port replacing: watchdogd
Nice to have:
- launchd port replacing/integrating: inetd
- launchd port replacing: atd
- launchd port replacing: crond
- launchd port replacing: (the rest)
- Contact: tech-kern
- Mentors: Christos Zoulas
- Duration estimate: 3 months
IMPORTANT: This project was completed by Vlad Balan. You may still contact the people above for details, but please do not submit an application for this project.
Many applications that use UDP unicast or multicast to receive data need to store the data together with its reception time, recording the arrival time of each packet as precisely as possible. This is required, for example, to replay the data in simulated real time for further performance analysis or quality control. Right now the only way to do this is to call recv(2)/recvmsg(2)/recvfrom(2) to grab the data, followed by gettimeofday(2)/clock_gettime(2). This is undesirable because:
- It doubles the number of system calls limiting the effective maximum reception rate.
- The timing of the packet is imprecise because there is a non-deterministic delay from the time the packet hits the network adapter until the time userland receives it.
Linux provides a socket control message to add timestamp ancillary data to the socket, which can be controlled by 2 different socket options, SO_TIMESTAMP and SO_TIMESTAMPNS, using setsockopt(2). Recently Linux added more SO_TIMESTAMPING_* options to provide even more precise timestamps from NIC cards that support them. This project is about implementing those options on NetBSD. Their implementation should be done as close to the code that does reception as possible to minimize timestamp delay. The SO_TIMESTAMP option is already done in the kernel (perhaps only partially): kern, ip, raw, udp, ip6, sys. A good reading would also be timestamping.c
This project is to finish implementing SO_TIMESTAMP in the kernel, test it, add SO_TIMESTAMPNS (both very easy, should be a few hours of work), and then design and implement the SO_TIMESTAMPING_* options, get at least one driver working with them and test them.
- Contact: tech-embed
- Duration estimate: 175h
Produce lessons and a list of affordable parts and free software that NetBSD hobbyists can use to teach themselves JTAG. Write your lessons for a low-cost embedded system or expansion board that NetBSD already supports.
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 175h
inetd is a classic method for launching network programs on-the-fly and some of its ideas are coming back into vogue. Enhancing this daemon should include investigations into other similar systems in other operating systems.
Primary milestones:
- Prefork: Support pre-forking multiple children and keeping them alive for multiple invocations.
- Per service configuration file: Add a per-service configuration file similar to xinetd.
- Make the rate-limiting feature configurable on a per-service basis.
- Improve the logging and make logging modes configurable on a per-service basis.
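A per-service configuration file in the style of xinetd might look like the following; this syntax is purely illustrative and is not implemented anywhere:

```
# /etc/inetd.d/daytime -- hypothetical per-service definition
service daytime {
	socket_type	= stream
	protocol	= tcp
	wait		= no
	user		= nobody
	server		= internal
	prefork		= 4		# keep 4 children alive across invocations
	rate_limit	= 256/60s	# per-service, not global
	log		= syslog daemon.info
}
```

Keeping one file per service is what makes the later "nice to have" items (include directives, enabling/disabling from /etc, non-privileged service definitions) straightforward to layer on.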
Nice to have:
- Add include directives to the configuration language to allow service definitions to be installed in /usr/share or /usr/pkg/share.
- Add a separate way to turn services on and off, so they can be defined statically (such as in /usr/share) and turned on and off from /etc.
- Allow non-privileged users to add/remove/change their own services using a separate utility.
- Integrate with the new blocklist daemon.
- Configuration compatibility for systemd socket activations
- Contact: tech-pkg, tech-kern
- Mentors: Julio Merino
- Duration estimate: 3 months
IMPORTANT: This project was completed by Dmitry Matveev. You may still contact the people above for details, but please do not submit an application for this project.
Monitoring of file system activity is a key piece of a modern desktop environment because it provides instant feedback to the user about any changes performed to the disk. In particular, file monitoring is an internal component of the Gnome infrastructure that allows the desktop to receive notifications when files or directories change. This way, if, say, you are viewing the Downloads folder in Nautilus and you start downloading a file from Epiphany into that folder, Nautilus will notice the new file and show it immediately without requiring a manual refresh.
How to monitor the file system depends on the operating system. There are basically two approaches: polling and asynchronous notifications. Polling is suboptimal because the notifications are usually delayed. Asynchronous notifications are tied to the operating system: Linux provides inotify, NetBSD provides kqueue and other systems provide their own APIs.
In the past, Gnome monitored the file system via a combination of FAM (a system-level service that provides an API to file system monitoring) and GNOME VFS (a high-level layer that hides the interaction with FAM). This approach was good in spirit (client/server separation) but didn't work well in NetBSD:
- FAM is abandoned.
- FAM does not support kqueue out of the box.
- FAM runs as root.
- FAM is too hard to set up: it requires rpcbind, an addition to etc/services, a sysctl tweak, and the configuration of a system-level daemon.
To solve some of these problems, a drop-in replacement for FAM was started. Gamin, as it is known, still does not fix everything:
- Gamin is abandoned.
- Supports kqueue out of the box, but does not work very well.
- Actually, Gamin itself does not work. Running the tests provided by the distfile in a modern Linux system results in several test failures.
Did you notice the abandoned pattern above? This is important: in the new world order, Gnome does not use FAM any more.
The new structure to monitor files is: the low-level glib library provides the gio module, which has a new file system monitoring API. The GVFS module provides higher level abstractions to file system management, and relies on gio for file system monitoring. There is no GNOME VFS any more.
The problematic point is: gio uses inotify directly; no abstraction layers in between. FAM support is still there for platforms without inotify, but as it is not used in Linux any more, both the FAM package and the support for it rot.
The goal of this project is to fix this situation. There are two possible approaches. The first is to extend the gio library with a new module to support kqueue, the native interface in NetBSD for file monitoring; this new module can be submitted upstream and will benefit all other BSD systems, but it will not help other applications that use inotify. The second is to write a compatibility library that exposes an inotify interface on top of kqueue; this cannot be sent upstream but it helps any application using inotify.
The preferred method is to write a new gio plugin to support kqueue. Henceforth, the deliverables for the project include a new gio module to support kqueue and, if time permits, a standalone library that implements an inotify interface on top of kqueue.
Assuming you are not familiar with kqueue, you might want to follow these steps to get started:
- Read the kqueue(2) manual page to get a general idea of the API and this example for starters.
- Analyze the kqueue support for FAM which is in pkgsrc/sysutils/fam/files/IMonKQueue.c++. This might be helpful to see how to apply kqueue to monitor for the events used by GNOME.
- Read the modules/inotify* files in gnome-vfs and inspect how they are used in the "file method". The modules/Makefile.am contains a list of related sources in the libfile_la_SOURCES variable.
- Possibly the hardest part: write the required stuff (modules/kqueue*) to add kqueue support.
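The heart of what the new gio module has to do can be seen in a minimal kqueue watcher like the sketch below (NetBSD/BSD-specific, so it will not build on systems without kqueue(2); the function name is invented). A gio module would feed these events into GFileMonitor callbacks instead of printing them:

```c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Watch one path with EVFILT_VNODE and report writes, deletes and
 * renames until the file goes away.
 */
static int
watch_file(const char *path)
{
	struct kevent ev;
	int kq, fd;

	if ((kq = kqueue()) == -1)
		return -1;
	if ((fd = open(path, O_RDONLY)) == -1)
		return -1;

	EV_SET(&ev, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
	    NOTE_WRITE | NOTE_DELETE | NOTE_RENAME, 0, NULL);

	for (;;) {
		/* register the filter and wait for one event */
		if (kevent(kq, &ev, 1, &ev, 1, NULL) == -1)
			return -1;
		if (ev.fflags & NOTE_WRITE)
			printf("%s: written\n", path);
		if (ev.fflags & (NOTE_DELETE | NOTE_RENAME)) {
			printf("%s: gone\n", path);
			close(fd);
			close(kq);
			return 0;
		}
	}
}
```

Note one design constraint this exposes: kqueue needs an open file descriptor per watched file, whereas inotify watches paths, which is one of the things an inotify-on-kqueue compatibility library would have to paper over.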
- Contact: tech-userlevel, tech-kern
- Duration estimate: 3 months
The existing puffs protocol gives a way to forward kernel-level file system actions to a userspace process. This project generalizes that protocol to allow forwarding file system actions arbitrarily across a network. This will make it possible to mount any kernel file system type from any location on the network, given a suitable arrangement of components.
The file system components to be used are puffs and rump. puffs is used to forward local file system requests from the kernel to userspace and rump is used to facilitate running the kernel file system in userspace as a service daemon.
The milestones are the following:
Write the necessary code to be able to forward requests from one source to another. This involves most likely reworking a bit of the libpuffs option parsing code and creating a puffs client (say, mount_puffs) to be able to forward requests from one location to another. The puffs protocol should be extended to include the necessary new features or a new protocol invented.
Proof-of-concept code for this has already been written. (Where is it?)
Currently the puffs protocol used for communication between the kernel and userland is machine dependent. To facilitate forwarding the protocol to remote hosts, a machine independent version must be specified.
To be able to handle multiple clients, the file systems must be converted to daemons instead of being utilities. This will also, in the case of kernel file system servers, include adding locking to the communication protocol.
The end result will look something like this:
# start serving ffs from /dev/wd0a on port 12675
onehost> ffs_serv -p 12675 /dev/wd0a
# start serving cd9660 from /dev/cd0a on port 12676
onehost> cd9660_serv -p 12676 /dev/cd0a
# meanwhile in outer space, mount anotherhost from port 12675
anotherhost> mount_puffs -t tcp onehost:12675 /mnt
anotherhost> mount
...
anotherhost:12675 on /mnt type <negotiated>
...
anotherhost> cd /mnt
anotherhost> ls
... etc
The implementor should have some familiarity with file systems and network services.
Any proposal should include answers to at least the following questions:
How is this different from NFS?
How is the protocol different from 9p?
How is this scheme going to handle the hard things that NFS doesn't do very well, such as distributed cache consistency?
Given industry trends, why is this project proposing a new protocol instead of conforming to the SOA model?
- Contact: tech-security
- Mentors: Alistair G. Crooks, David Holland
- Duration estimate: 3 months
IMPORTANT: This project was completed by Przemyslaw Sierocinski. You may still contact the people above for details, but please do not submit an application for this project.
This project requires the implementation of a new mount option, and a new system and user file system flag, which, when set, will write random data over file system blocks before they are to be deleted. In the NetBSD kernel, this will take place at the time of the last unlink of a file, and when ftruncate is called.
The project will involve the investigation of retrieving or generating random data within the kernel, along with research into ways of generating large amounts of low-quality random data, such as LFSRs, Mersenne twisters, and other PRNGs. As well as implementing the file system flags within the kernel, user-level programs and library functions which manipulate the flags will need to be modified. Documentation detailing the new functionality must also be provided.
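A userspace analogue makes the intended kernel behaviour concrete. The sketch below (function names invented; the kernel version would hook the last unlink and ftruncate paths instead) fills the file's blocks from a cheap 32-bit Galois LFSR, one of the low-cost generators mentioned above, before unlinking:

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* 32-bit Galois LFSR: cheap bulk low-quality randomness. */
static uint32_t
lfsr_next(uint32_t v)
{
	return (v >> 1) ^ (-(v & 1u) & 0xA3000000u);
}

/* Overwrite the whole file with LFSR output, then unlink it. */
static int
scrub_unlink(const char *path)
{
	struct stat st;
	uint32_t v = 0xDEADBEEF;
	unsigned char buf[4096];
	off_t left;
	int fd;

	if ((fd = open(path, O_WRONLY)) == -1)
		return -1;
	if (fstat(fd, &st) == -1) {
		close(fd);
		return -1;
	}
	for (left = st.st_size; left > 0; ) {
		size_t chunk = left > (off_t)sizeof(buf) ?
		    sizeof(buf) : (size_t)left;
		for (size_t i = 0; i < chunk; i++) {
			v = lfsr_next(v);
			buf[i] = (unsigned char)v;
		}
		ssize_t n = write(fd, buf, chunk);
		if (n <= 0) {
			close(fd);
			return -1;
		}
		left -= n;
	}
	fsync(fd);
	close(fd);
	return unlink(path);
}
```

Doing this in the kernel rather than in userspace is what makes the guarantee hold for every unlink, including those performed by unmodified programs.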
- Contact: tech-userlevel
- Mentors: Christos Zoulas
- Duration estimate: 350h
Design and implement a mechanism that allows fast user-level access to kernel time data structures on NetBSD. For certain types of small data structures the system call overhead is significant. This is especially true for frequently invoked system calls like clock_gettime(2) and gettimeofday(2). With the availability of user-level readable high-frequency counters it is possible to create fast implementations for precision time reading. Optimizing clock_gettime(2) and the like will reduce the strain from applications frequently calling these system calls and improve the quality of timing information available to applications like NTP. The implementation would be based on a to-be-modified version of the timecounters implementation in NetBSD.
Milestones:
- Produce optimizations for clock_gettime
- Produce optimizations for gettimeofday
- Show benchmarks before and after
- start evolving timecounters in NetBSD, demonstrating your improvements
See also the Paper on Timecounters by Poul-Henning Kamp.
- Contact: tech-userlevel
- Mentors: David Young
- Duration estimate: 3 months
Write a program that can read an email, infer quoting conventions, discern bottom-posted emails from top-posted emails, and rewrite the email using the conventions that the reader prefers. Then, take it a step further: write a program that can distill an entire email discussion thread into one document where every writer's contribution is properly attributed and appears in its proper place with respect to interleaved quotations.
- Contact: tech-kern
- Duration estimate: 3 months
Make NetBSD behave gracefully when a "live" USB/FireWire disk drive is accidentally detached and re-attached by, for example, creating a virtual block device that receives block-read/write commands on behalf of the underlying disk driver.
This device will delegate reads and writes to the disk driver, but it will keep a list of commands that are "outstanding," that is, reads that the disk driver has not completed, and writes that have not "hit the platter," so to speak.
Milestones:
- Provide a character device for userland to read indications that a disk in use was abruptly detached.
- Following disk re-attachment, the virtual block device replays its list of outstanding commands. A correct solution will not replay commands to the wrong disk if the removable was replaced instead of re-attached.
Open questions: Prior art? Isn't this how the Amiga worked? How will this interact with mount/unmount (is there a use-count on devices)? Can you leverage "wedges" in your solution? Does any/most/all removable storage indicate reliably when a written block has actually reached the medium?
pkgsrc duplicates NetBSD efforts in maintaining X.org packages. pkgsrc X11, once able to be cross-built, could replace the base X11 distribution.
The latter decoupling should simplify maintenance of software; updating X11 and associated software becomes easier because of pkgsrc's shorter release cycle and greater volatility.
Cross-buildable and cross-installable pkgsrc tools could also simplify maintenance of slower systems by utilising the power of faster ones.
The goal of this project is to make it possible to bootstrap pkgsrc using available cross-build tools (e.g. NetBSD's).
This project requires good understanding of cross-development, some knowledge of NetBSD build process or ability to create cross-development toolchain, and familiarity with pkgsrc bootstrapping.
Note: basic infrastructure for this exists as part of various previous GSoC projects. General testing is lacking.
- Contact: tech-pkg
- Duration estimate: 3 months
The goal of this project is to generate a package or packages that will set up a cross-compiling environment for one (slow) NetBSD architecture on another (fast) NetBSD architecture, starting with (and using) the NetBSD toolchain in src.
The package will require a checked-out NetBSD src tree (or, as a refinement, parts thereof) and is supposed to generate the necessary cross-building tools using src/build.sh with appropriate flags; for the necessary /usr/include and libraries of the slow architecture, build these to fit or use the slow architecture's base.tgz and comp.tgz (or parts thereof). As an end result, you should e.g. be able to install the binary cross/pkgsrc-NetBSD-amd64-to-atari package on your fast amd64 and start building packages for your slow atari system.
Use available packages, like e.g. pkgtools/pkg_comp, to build the cross-compiling environment where feasible.
As a test target for the cross-compiling environment, use pkgtools/pkg_install, which is readily cross-compilable by itself.
If time permits, test and fix cross-compiling pkgsrc X.org, which was made cross-compilable in an earlier GSoC project, but may have suffered from feature rot since actually cross-compiling it has been too cumbersome for it to see sufficient use.
- Contact: tech-userlevel
- Mentors: David Young
- Duration estimate: 3 months
Write a script that aids refactoring C programs by extracting subroutines from fragments of C code.
Do not reinvent the wheel: wherever possible, use existing technology for the parsing and comprehension of C code. Look at projects such as sparse and Coccinelle.
Your script should work as a filter that puts the free variables at the top, encloses the rest in curly braces, does something helpful with break, continue, and return statements.
That's just a start.
This project is tagged "hard" because it's not clearly specified. The first step (like with many projects) is to work out the specification in more detail.
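A hand-made before/after illustration of the desired transformation (this is not tool output, and the names are invented): the fragment's free variables become parameters, and a bare `return` becomes a status code the caller can act on:

```c
#include <stddef.h>

/*
 * Suppose the selected fragment appended a NUL terminator to a buffer
 * using the free variables buf, len and n.  The extracted routine
 * takes them as parameters; "return;" inside the fragment becomes a
 * status value the caller checks.
 */
static int
append_nul(char *buf, size_t len, size_t *n)
{
	if (*n >= len)
		return -1;	/* was: return; */
	buf[(*n)++] = '\0';
	return 0;
}
```

The hard cases the specification must pin down are exactly the ones this example glosses over: fragments with multiple `return` values, `break`/`continue` crossing the fragment boundary, and free variables that are written as well as read.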
- Contact: tech-kern, tech-userlevel
- Duration estimate: 3 months
Currently, the puffs(3) interface between the kernel and userspace uses various system structures for passing information. Examples are struct stat and struct uucred. If these change in layout (such as with the time_t size change in NetBSD 6.0), old puffs servers must be recompiled.
The project milestones are:
- define a binary-independent protocol
- implement support
- measure the performance difference with direct kernel struct passing
- if there is a huge difference, investigate the possibility for having both an internal and external protocol. The actual decision to include support will be made on the relative complexity of the code for dual support.
While this project will be partially implemented in the kernel, it is fairly well-contained and prior kernel experience is not necessary.
If there is time and interest, a suggested subproject is making sure that p2k(3) does not suffer from similar issues. This is a required subproject if dual support as mentioned above is not necessary.
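The essence of a binary-independent protocol is that field widths and byte order are fixed by the protocol definition rather than by the host's struct layout. A minimal sketch in Python's struct notation (the field selection and widths here are invented for illustration and are not the actual puffs wire format):

```python
import struct

# Fixed wire layout for a few "struct stat"-like attributes:
# explicit big-endian byte order, explicit widths.  A time_t growing
# from 32 to 64 bits in the host ABI then changes nothing on the wire.
WIRE_FMT = "!IIIQq"   # mode(u32), uid(u32), gid(u32), size(u64), mtime(s64)

def encode_attrs(mode, uid, gid, size, mtime):
    return struct.pack(WIRE_FMT, mode, uid, gid, size, mtime)

def decode_attrs(buf):
    return struct.unpack(WIRE_FMT, buf)
```

The performance question raised in the milestones is essentially the cost of this pack/unpack step compared to copying the raw kernel struct.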
- Contact: tech-userlevel
- Mentors: David Young
- Duration estimate: 3 months
Refactor utilities in the base system, such as netstat, top, and vmstat, that format & display tables of statistics.
One possible refactoring divides each program into three:
- one program reads the statistics from the kernel and writes them in machine-readable form to its standard output
- a second program reads machine-readable tables from its standard input and writes them in a human-readable format to the standard output
- and a third program supervises a pipeline of the first two programs, browses and refreshes a table.
Several utilities will share the second and third program.
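The middle program of the proposed pipeline could be as simple as the following sketch, which assumes (purely for illustration) that the machine-readable form is tab-separated text with a header row:

```python
import sys

# Sketch of the second program: read a machine-readable table on stdin,
# write aligned human-readable columns on stdout.  The TSV input format
# is an assumption made for this example, not a settled interface.
def humanize(lines):
    rows = [line.rstrip("\n").split("\t") for line in lines if line.strip()]
    widths = [max(len(r[i]) for r in rows) for i in range(len(rows[0]))]
    return ["  ".join(cell.ljust(w) for cell, w in zip(row, widths)).rstrip()
            for row in rows]

if __name__ == "__main__":
    for out in humanize(sys.stdin):
        print(out)
```

The first program would emit the TSV, and the third would run the two in a pipe and refresh the display periodically.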
- Contact: port-xen
- Mentors: Jean-Yves Migeon, Jan Schaumann, Adam Hamsik
- Duration estimate: 3 months
IMPORTANT: This project was completed by riz. You may still contact the people above for details, but please do not submit an application for this project.
Amazon Web Services offers virtual hosts on demand via its Amazon Elastic Compute Cloud (or EC2). Users can run a number of operating systems via so-called "Amazon Machine Images" (AMIs), currently including various Linux flavors and OpenSolaris, but not NetBSD.
This project would entail developing a NetBSD "AMI" and providing an easy step-by-step guide to create custom NetBSD AMIs. The difficulty of this is estimated to not be very high, as prior work in this direction exists (instructions for creating a FreeBSD AMI, for example, are available). Depending on the progress, we may adjust the final goals -- possibilities include adding necessary (though optional) glue to the release process to create official AMIs for each release or to provide images with common applications pre-installed.
See also: this thread on the port-xen mailing list as well as this page.
- Contact: tech-userlevel
- Mentors: Jörg Sonnenberger
- Duration estimate: 3 months
IMPORTANT: This project was completed by Abhinav Upadhyay. You may still contact the people above for details, but please do not submit an application for this project.
NetBSD ships a lot of useful documentation in the form of manual pages. Finding the right manual page can be difficult though. If you look for a library function, it will sometimes fail, because it is part of a larger manual page and doesn't have a MLINKS entry. If you look for a program, but don't know the exact name, it can be hard to find as well.
Historically, the content of the NAME section of each manual page has been extracted and put into a special file. The apropos command has been used to search this file based on keywords. This brings severe limitations as it restricts the list of potential matches significantly and requires very good descriptions of the content of a manual page in typically one line.
The goal of this project is to provide a modern replacement based on the Full Text Search of SQLite. The most basic version of the new apropos builds an index from the text output of mandoc and queries it using appropriate SQL syntax. Some basic form of position indication should be provided as well (e.g. line number).
A more advanced version could use the mandoc parser directly too. This would easily allow relatively precise position marks for the HTML version of manual pages. It would also allow weighting the context of a word. Consider Google's preference for URLs that contain the keywords, or for documents containing them in the headlines, as an example.
Another idea is to use the index for directly handling manual page aliases. This could replace the symbolic links currently used via the MLINKS mechanism. The aliases can be derived automatically from the .Nm macros in the manual page.
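The basic idea can be sketched with Python's bundled sqlite3 module. The schema and sample rows below are invented for the example, and FTS4 availability depends on how the underlying SQLite library was built:

```python
import sqlite3

# Minimal sketch of the proposed apropos replacement: index manual page
# text in an SQLite full-text-search table, then query it with MATCH.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE man USING fts4(name, section, body)")
conn.executemany("INSERT INTO man VALUES (?, ?, ?)", [
    ("open", "2", "open or create a file for reading or writing"),
    ("fopen", "3", "stream open functions"),
])
# Unlike the old NAME-line database, this matches anywhere in the text.
hits = conn.execute(
    "SELECT name, section FROM man WHERE body MATCH ?", ("stream",)).fetchall()
print(hits)
```

The real project would feed mandoc output into the index and add ranking and position reporting on top of this.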
- Contact: tech-userlevel
- Duration estimate: 3 months
IMPORTANT: This project was completed by utkarsh009. You may still contact the people above for details, but please do not submit an application for this project.
Anita is a tool for automated testing of NetBSD. Anita automates the process of downloading a NetBSD distribution, installing it into a fresh virtual machine and running the ATF tests in the distribution in a fully-automated manner. Originally, the main focus of Anita was on testing the sysinst installation procedure and on finding regressions that cause the system to fail to install or boot, but Anita is now used as the canonical platform for running the ATF test suite distributed with NetBSD. (You can see the results of such tests in the Test Run Logs page.)
At the moment, Anita supports qemu and Xen as the system to create the virtual machine with. qemu gives us the ability to test several ports of NetBSD (because qemu emulates many different architectures), but qemu is very slow because it lacks hardware virtualization support in NetBSD. The goal of this project is to extend Anita to support other virtual machine systems.
There are many virtual machine systems out there, but this project focuses on adding support to at least VirtualBox, because it is the canonical free virtual machine system for workstation setups. Another possibility would be adding support for SIMH for running NetBSD/vax.
The obvious deliverable is a new version of Anita that can use any of the mentioned virtual machines at run time, without having to be rebuilt, and the updated pkgsrc packages to install such updated version.
The candidate is supposed to know Python and be familiar with at least one virtual machine system.
This page includes a brainstorming of ideas for project proposals. While most, if not all, of the projects in this page are good ideas to propose to potential applicants, the ideas themselves have not received enough attention to write up a project specification for them. Any project considered good enough to be thrown at applicants should get a page of its own within projects/project/* and get a well-defined write-up explaining what the project is about, why it is important for NetBSD, and what is expected from the applicant.
Kernel Projects
related to (2) - rewrite sysctl to be less difficult to use
Implement SYN cookies, stretch goal - extended syn cookies with timestamp usage
difficult: a QoS implementation for NetBSD in the year 2011
Debug and merge the iSCSI initiator - stretch goal: get rid of the weird portal stuff which we don't need
update Linux emulation
Update to what? It was updated not long ago, to a 2.6.18 kernel.
Indeed, and RHEL 5 uses a modified 2.6.18 kernel. But RHEL 5 has been available since March 15, 2007, and is due to be "End of Production 1" in Q4 2011 -- agc
difficult: investigate ffs dir hashing, and fix related bugs (hard)
I implemented and debugged generic dir hashing. It could be extended to also allow FFS, but FFS's directory stuff is all over the place and I fear to break it. Also, FFS directory packing needs attention. -- Reinoud
difficult: single instance storage and de-duplication (might be too long?)
This might be possible to implement on top of device-mapper driver.
I was thinking that Single Instance Storage and de-duplication really needs access to userspace, and is best implemented there -- agc
difficult: get some form of jails up and running
Maybe restoring MULT to working state is an option here; another possibility is to get gaols working. Some fresh approach to jails might be implemented on RUMP, because it basically allows us to create in-process virtual machines and run them independently.
add shred/wiping and memtest options to boot menu
add the ability to store entropy at shutdown time, and to restore that same entropy at boot. At the same time, add a monitor (chi-squared?) for randomness over small sample sizes to /dev/random. Investigate other ways of generating pseudo-random numbers - e.g. two Mersenne Twisters sampling at different speeds, throwing away random interim values as directed by another re-keyed PRNG, or better. I.e. beef up our random number generation.
find a method to boot a boot.cfg entry once (and then fall back to the default method). This requires, especially, that the boot flip back to the default method if the once-off fails.
difficult: write a new DTrace provider for watching locks (lockdebug), userspace, or any other from the available ones: http://wikis.sun.com/display/DTrace/Providers
Change keyboard drivers to emit USB scancodes in event mode so for example ADB or Sun keyboards can coexist with things like USB keypads on the same mux and we don't need a separate InputDevice section in xorg.conf for each. This should be relatively easy.
Port FreeBSD's DAHDI implementation to NetBSD, so that Asterisk on NetBSD will have hardware support. FreeBSD's DAHDI implementation is SMP, so this isn't a simple port. Also, just about every FreeBSD kernel API related to device drivers and modules is different from the NetBSD equivalent, meaning that one needs to basically know both kernels.
Merge all PC style floppy drivers to be similar to other "portable" drivers that use a bus frontend with a common chip backend (or possibly two: pseudo DMA and real DMA).
Create a new HID framework so that we can merge the USB and Bluetooth HID drivers, also allowing creation of drivers to properly handle devices such as Apple Magic Mouse which use several [undeclared] reports.
Improve tmpfs
Update zfs pool to version 5000.
Add MI interface for IEEE 802.3 Clause 45 MDIO access.
IEEE 802.3az (Energy-Efficient Ethernet, EEE) support.
Add support for the dot3 MIB.
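The chi-squared monitor suggested above for /dev/random boils down to comparing observed byte frequencies in a small sample against the uniform expectation. A minimal sketch, with the "suspicious" threshold left open (for 255 degrees of freedom a healthy sample should score near 255):

```python
# Chi-squared statistic over byte frequencies of a sample.  A perfectly
# uniform sample scores 0; a heavily biased one scores very high.
def chi2_bytes(sample):
    counts = [0] * 256
    for b in sample:
        counts[b] += 1
    expected = len(sample) / 256.0
    return sum((c - expected) ** 2 / expected for c in counts)

uniform = bytes(range(256)) * 4
print(chi2_bytes(uniform))        # → 0.0
print(chi2_bytes(b"\x00" * 1024)) # very large: constant input
```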
Userspace Projects
a Java implementation we can use across all NetBSD platforms
(controversial) a pcap-based port-knocking system, similar to pkgsrc/net/knock
puffs-based cvsfs (there's one out there which relies on extattrs)
fuse lowlevel
Wasn't this already done?
ReFUSE does the upper layer of FUSE, and the excellent /dev/fuse does the device interface, but we are still lacking FUSE lowlevel functionality -- agc
automatic setlist generation - see my post to tech-userlevel a few months ago for the code I used (and which was used to generate all the libisns entries, btw)
Embedded World-Generator: a tool to build smaller distributions using custom packages and software - aimed at the embedded market
Device-mapper RUMP testing: write a device-mapper testing framework based on libdm and RUMP; this should allow us to develop new targets easily.
RUMP ZFS testing: integrate and write ZFS testsuite for NetBSD.
Update the userspace build of ddb (/sbin/crash): add more architectures and more functions for live/post-mortem kernel debugging.
A new puffs(4) based auto-mount daemon which supports direct mounting (not via "/amd") and Solaris style auto-mounter maps.
port valgrind to NetBSD - valgrind is a code instrumentation framework that can be used to find memory-related bugs in programs as well as conduct performance analysis. Valgrind is written for Linux, and recently there has been a Mac OS X port. The code is very much Linux-specific, and we should talk to the maintainers before starting work on it to find out if they will accept a NetBSD port, and also provide some assistance with the design of a portability layer. We should also do some research to see if we can re-use the FreeBSD port. For a previous attempt see http://vg4nbsd.berlios.de/
Implement enough of libudev so that Gnome (KDE?) in pkgsrc can use it
Stack Backtrace Library - Write a library that the kernel debugger and return_address(9) can use to walk down the call stack. Use the technique that David Laight recommends: TBD.
Daily autotest system for ARM
pkgsrc projects
Modular libtool: rewrite libtool replacement that is faster and more convenient to maintain, so that one can add compilers to it without rebuilding the whole package.
Go back to previous pkgsrc install state: Write a tool to help with reverting the installation to some previous state. See pkg_tarup, pkg_chk.
Add support for multi-packages (building multiple binary packages from single source).
don't we do that already? (I guess this means: more info, please) -- spz
No, we don't. We don't produce more than one package during the single build.
Running VirtualBox on NetBSD as a host.
NetBSD participated successfully in the following Google's Summer of Code programs (see our results of 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023 )
This page contains a list of concrete suggestions for projects we would like to see applications for in the next Summer of Code. Note that they vary a lot in required skills and difficulty. We hope to get applications with a broad spectrum.
In addition, you may wish to discuss your proposal on IRC -- look for us on Libera.chat's #netbsd-code or for pkgsrc-related discussions, #pkgsrc. If you want to just meet the community, visit #netbsd.
We encourage you to come up with your own suggestions, if you cannot find a suitable project here. You can find more project ideas on the NetBSD projects page. These are not directly applicable to Summer-of-Code, but may serve as ideas for your own suggestions. You might find other ideas in src/doc/TODO and pkgsrc/doc/TODO.
Deadlines and directions for students' applications to the Google Summer-of-Code can be found on the Google pages.
Application process
To make the job of sorting out proposals and applications for NetBSD-related projects, e.g. in the Google Summer-of-Code, easier for us, there are a few questions that we would like to see answered.
If you are interested in working on any of the projects below, please contact the mailing list referenced on each item, and possibly answer as many questions from our project application guidelines as possible. The interested developers will be glad to respond to you there.
Please note that Google Summer-of-Code projects are a full (day-) time job.
A positive mid-term evaluation is only possible if usable code has been committed by that time. Make sure your schedule allows for this.
Kernel-level projects
Easy
Medium
- Convert a Wi-Fi driver to the new Wi-Fi stack (175h)
- ALTQ Refactoring and NPF Integration (350h)
- Test root device and root file system selection (350h)
- RFC 5927 countermeasures against IPv6 ICMP attacks on TCP
- auto create swap on memory pressure (175h)
- Merge code from two Realtek Wifi Drivers (175h)
- Userland PCI drivers (350h)
- Porting Raspberry Pi graphics -- VC4 DRM driver (350h)
- VMWare graphical acceleration (350h)
Hard
Userland projects
Easy
Medium
- Add UEFI boot options
- Audio visualizer for the NetBSD base system (350h)
- Light weight precision user level time reading (350h)
- Query optimizer for find(1) (350h)
- IKEv2 daemon for NetBSD (350h)
- Port launchd (350h)
- Add support for OpenCL and Vulkan to NetBSD xsrc (175h)
- Automatic tests for PAM
- Efficient Package Distribution
- SASL-C implementation for the OpenLDAP client (350h)
- Secure-PLT - supporting RELRO binaries (350h)
- Research and integrate the static code analyzers with the NetBSD codebase (350h)
- Sysinst alternative interface (350h)
Hard
Code Quality Improvement projects
Easy
Medium
Hard
pkgsrc projects
Easy
Medium
Hard
Other projects
Medium
Comments
We are trying to be fair; expect easy projects to require less knowledge and skill, but quite a bit of work.
Medium and hard projects are hard enough to qualify as practical part of a master's thesis (it'll qualify as thesis topic if you can add sufficient quality theoretical parts). We had the honor to mentor several in past GSoCs. Talk to your adviser(s) if and how you can claim academic credit for the project you do with us.
We have not yet failed a student who worked hard and actually talked (and listened) to their mentors and the community. If unexpected roadblocks make your project goals too hard to reach in the time given, the goals can be re-negotiated. They will not be for rampant slacking, though.
What we expect from contributors (both GSoC students and generally) is that they cooperate, that they are able to communicate (this will mean some English skills, sorry; we will try to support with mentors speaking the same contributor language if that can be a problem), and that they meet a minimum of good manners towards other people on our lists and other venues. Note that being a specific color, gender, nationality, religion, etc is not listed: If you are willing and able to contribute in a constructive manner, you are welcome.
Traditionally, the NetBSD kernel code had been protected by a single, global lock. This lock ensured that, on a multiprocessor system, two different threads of execution did not access the kernel concurrently and thus simplified the internal design of the kernel. However, such design does not scale to multiprocessor machines because, effectively, the kernel is restricted to run on a single processor at any given time.
The NetBSD kernel has been modified to use fine-grained locks in many of its different subsystems, achieving good performance on today's multiprocessor machines. Unfortunately, these changes have not yet been applied to the networking code, which remains protected by the single lock. In other words: NetBSD networking has evolved to work in a uniprocessor environment; switching it to use fine-grained locking is a hard and complex problem.
This project is currently claimed
Funding
At this time, The NetBSD Foundation is accepting project specifications to remove the single networking lock. If you want to apply for this project, please send your proposal to the contact addresses listed above.
Due to the size of this project, your proposal does not need to cover everything to qualify for funding. We have attempted to split the work into smaller units, and you can submit funding applications for these smaller subtasks independently as long as the work you deliver fits in the grand order of this project. For example, you could send an application to make the network interfaces alone MP-friendly (see the work plan below).
What follows is a particular design proposal, extracted from an original text written by Matt Thomas. You may choose to work on this particular proposal or come up with your own.
Tentative specification
The future of NetBSD network infrastructure has to efficiently embrace two major design criteria: Symmetric Multi-Processing (SMP) and modularity. Other design considerations include not only supporting but taking advantage of the capability of newer network devices to do packet classification, payload splitting, and even full connection offload.
You can divide the network infrastructure into 5 major components:
- Interfaces (both real devices and pseudo-devices)
- Socket code
- Protocols
- Routing code
- mbuf code.
Part of the complexity is that, due to the monolithic nature of the kernel, each layer currently feels free to call any other layer. This makes designing a lock hierarchy difficult and likely to fail.
Part of the problem is asynchronous upcalls, which include:
- ifa->ifa_rtrequest for route changes.
- pr_ctlinput for interface events.
Another source of complexity is the large number of global variables scattered throughout the source files. This makes putting locks around them difficult.
Subtasks
The proposed solution presented here includes the following tasks (in no particular order) to achieve the desired goals of SMP support and modularity:
- Lockless, atomic FIFO/LIFO queues
- Lockless, atomic and generic Radix/Patricia trees
- Fast protocol and port demultiplexing
- Implement per-interface interrupt handling
- Kernel continuations
- Lazy receive processing
- Separate nexthop cache from the routing table
- Make TCP syncache optional
- Virtual network stacks
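The radix/Patricia and nexthop-cache subtasks above are ultimately about route lookup: finding the longest prefix that matches a destination. The following minimal sketch shows only the lookup semantics a real (lockless, kernel-resident) tree must provide; the linear scan and the data layout are invented for illustration.

```python
# Longest-prefix-match lookup over a route table.
# routes: {(prefix, plen): nexthop} with prefix as an integer address;
# the entry with the longest matching prefix length wins.
def lpm_lookup(routes, addr, width=32):
    best = None
    for (prefix, plen), nexthop in routes.items():
        mask = ((1 << plen) - 1) << (width - plen) if plen else 0
        if addr & mask == prefix and (best is None or plen > best[0]):
            best = (plen, nexthop)
    return best[1] if best else None

routes = {
    (0x0A000000, 8): "gw-a",    # 10.0.0.0/8
    (0x0A010000, 16): "gw-b",   # 10.1.0.0/16
}
print(lpm_lookup(routes, 0x0A010203))   # → gw-b (10.1.2.3, /16 beats /8)
```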
Work plan
Aside from the list of tasks above, the work to be done for this project can be achieved by following these steps:
Move ARP out of the routing table. See the nexthop cache project.
Make the network interfaces MP-safe; they are among the few remaining users of the big kernel lock. This needs to support multiple receive and transmit queues to help reduce locking contention. It also includes changing more of the common interfaces to do what the tsec driver does (basically do everything with softints), and changing the *_input routines to use a table to do dispatch instead of the current switch code, so domains can be dynamically loaded.
Collect global variables in the IP/UDP/TCP protocols into structures. This helps the following items.
Make IPV4/ICMP/IGMP/REASS MP-friendly.
Make IPV6/ICMP/IGMP/ND MP-friendly.
Make TCP MP-friendly.
Make UDP MP-friendly.
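The table-driven dispatch mentioned in the work plan can be illustrated with a toy registry; the names here are invented for the example, but the shape of the change is the point: a loadable domain registers its handler at run time instead of being a case in a compiled-in switch.

```python
# Toy table-driven protocol dispatch replacing switch-based *_input code.
_input_handlers = {}

def register_domain(proto, handler):
    """A dynamically loaded domain registers its input routine here."""
    _input_handlers[proto] = handler

def ip_input(proto, packet):
    handler = _input_handlers.get(proto)
    if handler is None:
        return "drop"            # no domain registered for this protocol
    return handler(packet)

register_domain(6, lambda pkt: "tcp:%d" % len(pkt))
print(ip_input(6, b"abc"))   # → tcp:3
print(ip_input(99, b"abc"))  # → drop
```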
Radical thoughts
You should also consider the following ideas:
LWPs in user space do not need a kernel stack
Those pages are only being used in case an exception happens. Interrupts probably go to their own dedicated stack. One could just keep a set of kernel stacks around: each CPU has one; when a user exception happens, that stack is assigned to the current LWP and removed as the active CPU one. When that CPU next returns to user space, the kernel stack it was using is saved to be used for the next user exception. The idle lwp would just use the current kernel stack.
LWPs waiting for kernel condition shouldn't need a kernel stack
If an LWP is waiting on a kernel condition variable, it is expecting to be inactive for some time, possibly a long time. During this inactivity, it does not really need a kernel stack.
When the exception handler gets a usermode exception, it sets an LWP restartable flag that indicates that the exception is restartable, and then services the exception as normal. As routines are called, they can clear the LWP restartable flag as needed. When an LWP needs to block for a long time, instead of calling cv_wait, it could call cv_restart. If cv_restart returned false, the LWP's restartable flag was clear, so cv_restart acted just like cv_wait. Otherwise, the LWP and CV would have been tied together (big hand wave), the lock would have been released, and the routine should have returned ERESTART. cv_restart could also wait for a small amount of time, like .5 second, and do all of this only if the timeout expires.

As the stack unwinds, eventually it would return to the exception handler. The handler would see the LWP has a bound CV, save the LWP's user state into the PCB, set the LWP to sleeping, mark the lwp's stack as idle, and call the scheduler to find more work. When called, cpu_switchto would notice the stack is marked idle, and detach it from the LWP.

When the condition times out or is signalled, the first LWP attached to the condition variable is marked runnable and detached from the CV. When the cpu_switchto routine is called, it would notice the lack of a stack, so it would grab one, restore the trapframe, and reinvoke the exception handler.
The projects listed in this page have been pre-approved for funding. If you choose to work on any of these projects, contact first the corresponding person or group and discuss the exact details of your work, the schedule and the compensation you expect.
List of funded projects
- Contact: tech-net
Enhance zeroconfd, the Multicast DNS daemon, that was begun in NetBSD's Google Summer of Code 2005 (see work in progress: http://netbsd-soc.sourceforge.net/projects/zeroconf/). Develop a client library that lets a process publish mDNS records and receive asynchronous notification of new mDNS records. Adapt zeroconfd to use event(3) and queue(3). Demonstrate comparable functionality to the GPL or APSL alternatives (Avahi, Howl, ...), but in a smaller storage footprint, with no dependencies outside of the NetBSD base system.
- Contact: tech-kern
- Duration estimate: 1 year for port; 3 years for rewrite by one developer
Implement a BSD licensed XFS. A GPL licensed implementation of XFS is available at http://oss.sgi.com/projects/xfs/.
Alternatively, or additionally, it might be worthwhile to do a port of the GPL code and allow it to run as a kernel module.
See also FreeBSD's port.
- Contact: port-xen
- Mentors: Jean-Yves Migeon
- Duration estimate: 3-6 months, depending on subsystem considered
This project's work is composed of smaller components that can be worked on independently from others, all related to the Xen port.
Xen has support for a number of machine-dependent features that NetBSD currently does not implement in the x86 Xen port. Notably:
- PCI passthrough, where PCI devices can be exposed to a guest via Xen protected mem/regs mappings;
- IOMMU (Intel's VT-d, or AMD's IOV) that protects memory access from devices for I/O, needed for safe operation of PCI/device passthrough;
- ACPI, and more specifically, ACPI states. Most commonly used on native systems to suspend/resume/shutdown a host;
- CPU and memory hotplugging;
- more elaborate VM debugging through gdbx, a lightweight debugger included with Xen.
The purpose of this project is to either add the missing parts inside NetBSD (some requiring native implementation first like IOMMU), or implement the needed interface to plug current native x86 systems (like pmf(9) for ACPI hypercalls).
- Contact: port-amd64, port-i386
- Mentors: Jean-Yves Migeon
- Duration estimate: 6 months
This project is about implementing the needed support for Intel's VT-d and AMD-IOV functionality in the native x86 ports, with a focus on amd64 first (i386 being a nice-to-have, but not strictly required).
NetBSD already has machine-independent bus abstraction layers (namely, bus_space(9) for bus-related memory operations, and bus_dma(9) for DMA related transactions) that are successfully used on other arches like SPARC for IOMMU.
The present project is to implement the machine-dependent functions to support IOMMU on x86.
Please note that it requires specific hardware for testing, as not all motherboards/chipsets have IOMMU support, let alone have it working correctly. In case of doubt, ask on the mailing list or point of contact.
- Contact: netbsd-docs
The NetBSD website building infrastructure is rather complex and requires significant resources. We need to make it easier for anybody to contribute without having to install a large number of complex applications from pkgsrc or without having to learn the intricacies of the build process.
A more detailed description of the problem is described in this and this email and the following discussion on the netbsd-docs.
This work requires knowledge of XML, XSLT and make. This is not a request for visual redesign of the website.
- Contact: tech-net
Add socket options to NetBSD for controlling WLAN transmit parameters like transmit power, fragmentation threshold, RTS/CTS threshold, bitrate, 802.11e access category, on a per-socket and per-packet basis. To set transmit parameters, pass radiotap headers using sendmsg(2) and setsockopt(2).
This project is on hold due to the conversion project needing to be completed first.
- Contact: tech-net, tech-userlevel
- Duration estimate: 1 month
Create an easy to use wifi setup widget for NetBSD: browse and select networks in the vicinity by SSID, BSSID, channel, etc.
The guts should probably be done as a library so that user interfaces of increasing slickness can be built on top of it as desired. (That is: there ought to be some form of this in base; but a nice looking gtk interface version would be good to have as well.)
- Contact: tech-userlevel
- Duration estimate: 1-2 months
Due to the multitude of supported machine architectures NetBSD has to deal with many different partitioning schemes. To deal with them in a uniform way (without imposing artificial restrictions that are not enforced by the underlying firmware or bootloader partitioning scheme) wedges have been designed.
While the kernel part of wedges is mostly done (and missing parts are easy to add), a userland tool to edit wedges and to synthesize defaults from (machine/arch dependent) on-disk content is needed.
- Contact: tech-userlevel
While we now have mandoc for handling man pages, we currently still need groff in the tree to handle miscellaneous docs that are not man pages.
This is itself an inadequate solution as the groff we have does not support PDF output (which in this day and age is highly desirable) ... and while newer groff does support PDF output it does so via a Perl script. Also, importing a newer groff is problematic for assorted other reasons.
We need a way to typeset miscellaneous articles that we can import into base and that ideally is BSD licensed. (And that can produce PDFs.) Currently it looks like there are three decent ways forward:
Design a new roff macro package that's comparable to mdoc (e.g. supports semantic markup) but is for miscellaneous articles rather than man pages, then teach mandoc to handle it.
Design a new set of markup tags comparable to mdoc (e.g. supports semantic markup) but for miscellaneous articles, and a different less ratty syntax for it, then teach mandoc to handle this.
Design a new set of markup tags comparable to mdoc (e.g. supports semantic markup) but for miscellaneous articles, and a different less ratty syntax for it, and write a new program akin to mandoc to handle it.
These are all difficult and a lot of work, and in the case of new syntax are bound to cause a lot of shouting and stamping. Also, many of the miscellaneous documents use various roff preprocessors and it isn't clear how much of this mandoc can handle.
None of these options is particularly appealing.
There are also some less decent ways forward:
Pick one of the existing roff macro packages for miscellaneous articles (ms, me, ...) and teach mandoc to handle it. Unfortunately all of these macro packages are pretty ratty, they're underpowered compared to mdoc, and none of them support semantic markup.
Track down one of the other older roff implementations, that are now probably more or less free (e.g. ditroff), then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, any of these programs are likely to be old nasty code that needs a lot of work.
Teach the groff we have how to emit PDFs, then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, this will likely be pretty nasty work and it's still got the wrong license.
Rewrite groff as BSD-licensed code and provide support for generating PDFs, then stick to the existing roff macro packages as above. In addition to the drawbacks cited above, this is a horrific amount of work.
Try to make something else do what we want. Unfortunately, TeX is a nonstarter and the only other halfway realistic candidate is lout... which is GPLv3 and at least at casual inspection looks like a horrible mess of its own.
These options are even less appealing.
Maybe someone can think of a better idea. There are lots of choices if we give up on typeset output, but that doesn't seem like a good plan either.
- Contact: tech-kern
Add memory-efficient snapshots to tmpfs. A snapshot is a view of the filesystem, frozen at a particular point in time. The snapshotted filesystem is not frozen, only the view is. That is, you can continue to read/write/create/delete files in the snapshotted filesystem.
The interface to snapshots may resemble the interface to null mounts, e.g., 'mount -t snapshot /var/db /db-snapshot' makes a snapshot of /var/db/ at /db-snapshot/.
You should exploit features of the virtual memory system like copy-on-write memory pages to lazily make copies of files that appear both in a live tmpfs and a snapshot. This will help conserve memory.
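The copy-on-write behavior described above can be illustrated at file granularity with a toy sketch (real tmpfs snapshots would share and copy individual memory pages through the VM system, and the class and names below are invented for the example):

```python
# Toy copy-on-write snapshot: the snapshot shares the live table until a
# write occurs, at which point only the touched entry is preserved.
class CowSnapshot:
    def __init__(self, live):
        self.live = live
        self.frozen = {}          # old versions, copied lazily on write

    def write(self, name, data):
        if name in self.live and name not in self.frozen:
            self.frozen[name] = self.live[name]   # preserve old version
        self.live[name] = data

    def snapshot_read(self, name):
        return self.frozen.get(name, self.live.get(name))

fs = {"motd": b"hello"}
snap = CowSnapshot(fs)
snap.write("motd", b"changed")
print(snap.snapshot_read("motd"))   # → b'hello' (snapshot view is frozen)
print(fs["motd"])                   # → b'changed' (live fs keeps moving)
```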
- Contact: tech-install, tech-misc
syspkgs is the concept of using pkgsrc's pkg_* tools to maintain the base system. That is, allow users to register and manage components of the base system with more ease.
There has been a lot of work in this area already, but it has not yet been finalized. Which is a diplomatic way of saying that this project has been attempted repeatedly and failed every time.
- Contact: port-sparc, tech-ports
It would be nice to support these newer highly SMP processors from Sun. A Linux port already exists, and Sun has contributed code to the FOSS community.
(Some work has already been done and committed; see https://wiki.netbsd.org/ports/sparc64/sparc64sun4v/)
- Contact: tech-kern
- Duration estimate: 8-12 months
Remove the residual geometry code and datastructures from FFS (keep some kind of allocation groups but without most of what cylinder groups now have) and replace blocks and fragments with extents, yielding a much simpler filesystem well suited for modern disks.
Note that this results in a different on-disk format and will need to be a different file system type.
The result would be potentially useful to other operating systems beyond just NetBSD, since UFS/FFS is used in so many different kernels.
- Contact: tech-kern
Certain real-time clock chips, and other related power hardware, have a facility that allows the kernel to set a specific time and date at which the machine will power itself on. One such chip is the DS1685 RTC. A kernel API should be developed to allow such devices to have a power-on time set from userland. Additionally, the API should be made available through a userland program, or added to an existing utility, such as halt(8).
It may also be useful to make this a more generic interface, to allow for configuring other devices, such as Wake-On-Lan ethernet chips, to turn them on/off, set them up, etc.
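As a rough illustration, the kernel-facing side of such an API might look like the following sketch. Nothing here exists in NetBSD today; the struct, ioctl name, and helper are purely hypothetical:

```c
#include <assert.h>
#include <time.h>

/*
 * Hypothetical shape of the proposed API: a real version would be an
 * ioctl on the RTC device node plus a flag to halt(8).
 */
struct rtc_wake {
    time_t rw_when;             /* absolute UTC time to power back on */
};
#define RTCIOC_SETWAKE 1        /* would be _IOW('r', ...) in a real header */

/* Validate a wake request before handing it to the driver. */
int rtc_wake_valid(const struct rtc_wake *rw, time_t now)
{
    return rw->rw_when > now;   /* must be strictly in the future */
}
```

A userland utility would fill in `rw_when` and issue the ioctl; the driver would translate the absolute time into the chip-specific alarm registers.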
- Contact: port-mips
NetBSD/sgimips currently runs on O2s with R10k (or similar) CPUs, but for example speculative loads are not handled correctly. It is unclear if this is pure kernel work or the toolchain needs to be changed too.
Currently softfloat is used, and bugs seem to exist in the hardware float support. Resolving these bugs and switching to hardware float would improve performance.
See also NetBSD/sgimips.
- Contact: port-mips
NetBSD/sgimips currently runs on a range of SGI hardware, but support for IP27 (Origin) and IP30 (Octane) is not yet available.
See also NetBSD/sgimips.
- Contact: port-mips
Currently, booting an sgimips machine requires different boot commands depending on the architecture. It is not possible to use the firmware menu to boot from CD.
An improved primary bootstrap should ask the firmware for architecture detail, and automatically boot the correct kernel for the current architecture by default.
A secondary objective of this project would be to rearrange the generation of a bootable CD image so that it could just be loaded from the firmware menu without going through the command monitor.
- Contact: tech-net
Create/modify a 802.11 link-adaptation module that works at least as well as SampleRate, but is generic enough to be re-used by device drivers for ADMtek, Atheros, Intel, Ralink, and Realtek 802.11 chips. Make the module export a link-quality metric (such as ETT) suitable for use in linkstate routing. The SampleRate module in the Atheros device driver sources may be a suitable starting point.
- Contact: tech-userlevel
- Duration estimate: 2 months
IMPORTANT: This project was completed by Kristaps Dzonsons at openbsd. You may still contact the people above for details, but please do not submit an application for this project.
Create a BSD licensed drop-in replacement for rsync that can handle large numbers of files/directories and large files efficiently. The idea would be to not scan the filesystem for every client to detect changes that need transfer, but rather maintain some database that can be queried (and that also needs updating when files are changed on the server). See supservers(8) for some ideas of how this could work. Compatibility with existing rsync clients should be retained.
- Contact: tech-net
- Duration estimate: 2 months
Write tests for the routing code and re-factor. Use more and better-named variables.
PCBs and other structures are sprinkled with route caches (struct route). Sometimes they do not get updated when they should; fix that. It will be necessary to modify rtalloc(), at least. Note the XXX in rtalloc(); it may mark a potential memory leak!
- Contact: tech-net
- Duration estimate: 3 months
Implement the ability to route based on properties like QoS label, source address, etc.
- Contact: tech-kern
The policy code in the kernel that controls file caching and readahead behavior is necessarily one-size-fits-all, and the knobs available to applications to tune it, like madvise() and posix_fadvise(), are fairly blunt hammers. Furthermore, it has been shown that the overhead from user<->kernel domain crossings makes syscall-driven fine-grained policy control ineffective. (Though, that was shown in the past when processors were much slower relative to disks and it may not be true any more.)
Is it possible to use BPF, or create a different BPF-like tool (that is, a small code generator with very simple and very clear safety properties) to allow safe in-kernel fine-grained policy control?
Caution: this is a research project.
- Contact: tech-pkg
IMPORTANT: This project was completed by joerg. You may still contact the people above for details, but please do not submit an application for this project.
To create packages that are usable by anyone, pkgsrc currently requires that packages be built with superuser privileges. It is already possible to use large parts of pkgsrc without such privileges, but little thought has gone into how the resulting binary packages should be specified. For example, many packages don't care at all about the owner/group of their files, as long as they are not publicly writable. In the end, the binary packages should be as independent from the build environment as possible.
For more information about the current state, see the How to use pkgsrc as non-root section in the pkgsrc guide, Jörg's mail on DESTDIR support as well as pkgsrc/mk/unprivileged.mk.
- Contact: tech-misc, tech-ports
- Duration estimate: 4 months
NetBSD currently requires a system with an MMU. This obviously limits the portability. We'd be interested in an implementation/port of NetBSD on/to an MMU-less system.
- Contact: tech-userlevel
Apply statistical AI techniques to the problem of monitoring the logs of a busy system. Can one identify events of interest to a sysadmin, or events that merit closer inspection? Failing that, can one at least identify some events as routine and provide a filtered log that excludes them? Also, can one group a collection of related messages together into a single event?
- Contact: netbsd-users, tech-install
While NetBSD has had LiveCDs for a while, there has not yet been a LiveCD that allows users to install NetBSD after test-driving it. A LiveCD that contains a GUI-based installer and reliably detects the platform's features would be very useful.
- Contact: tech-pkg, port-xen
- Mentors: Jean-Yves Migeon
- Duration estimate: 1-3 weeks, depending on the targeted operating system
Libvirt is a project that aims at bringing yet another level of abstraction to the management of different virtualization technologies; it supports a wide range of them, like Xen, VMWare, KVM and containers.
A package for libvirt was added to pkgsrc under sysutils/libvirt, however it requires more testing before all platforms supported by pkgsrc can also seamlessly support libvirt.
The purpose of this project is to investigate what is missing in libvirt (in terms of patches or system integration) so it can work out-of-the-box for platforms that can benefit from it. GNU/Linux, NetBSD, FreeBSD and Solaris are the main targets.
- Contact: tech-net
Improve on the Kismet design and implementation in a Kismet replacement for BSD.
- Contact: tech-kern, David Holland
- Duration estimate: 2-3 months
kernfs is a virtual file system that reports information about the running system, and in some cases allows adjusting this information. procfs is a virtual file system that provides information about currently running processes. Both of these file systems work by exposing virtual files containing textual data.
The current implementations of these file systems are redundant and both are non-extensible. For example, kernfs is a hardcoded table that always exposes the same set of files; there is no way to add or remove entries on the fly, and even adding new static entries is a nuisance. procfs is similarly limited; there is no way to add additional per-process data on the fly. Furthermore, the current code is not modular, not well designed, and has been a source of security bugs in the past.
We would like to have a new implementation for both of these file systems that rectifies these problems and others, as outlined below:
kernfs and procfs should share most of their code, and in particular they should share all the code for managing lists of virtual files. They should remain separate entities, however, at least from the user perspective: community consensus is that mixing system and per-process data, as Linux always has, is ugly.
It should be possible to add and remove entries on the fly, e.g. as modules are loaded and unloaded.
Because userlevel programs can become dependent on the format of the virtual files (Linux has historically had compatibility problems because of this) they should if possible not have complex formats at all, and if they do the format should be clearly specifiable in some way that isn't procedural code. (This makes it easier to reason about, and harder for it to get changed by accident.)
There is an additional interface in the kernel for retrieving and adjusting arbitrary kernel information: sysctl. Currently the sysctl code is a third completely separate mechanism, on many points redundant with kernfs and/or procfs. It is somewhat less primitive, but the current implementation is cumbersome and not especially liked. Integrating kernfs and procfs with sysctl (perhaps somewhat like the Linux sysfs) is not automatically the right design choice, but it is likely to be a good idea. At a minimum we would like to be able to have one way to handle reportable/adjustable data within the kernel, so that kernfs, procfs, and/or sysctl can be attached to any particular data element as desired.
While most of the implementations of things like procfs and sysctl found in the wild (including the ones we currently have) work by attaching callbacks, and then writing code all over the kernel to implement the callback API, it is possible to design instead to attach data, that is, pointers to variables within the kernel, so that the kernfs/procfs or sysctl code itself takes responsibility for fetching that data. Please consider such a design strongly and pursue it if feasible, as it is much tidier. (Note that attaching data also in general requires specifying a locking model and, for writeable data, possibly a condition variable to signal on when the value changes and/or a mechanism for checking new values for validity.)
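A sketch of the data-attachment idea, with all names hypothetical: the node records a pointer to the kernel variable, its size, and (in a real version) its locking model, and one generic routine performs every fetch, so no per-variable callback code is needed:

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical data-attachment node: generic code fetches the variable
 * itself, instead of each subsystem implementing a callback API.
 */
struct kdata_node {
    const char *kn_name;
    void       *kn_ptr;     /* the kernel variable itself */
    size_t      kn_size;
    /* kmutex_t *kn_lock;   -- locking model omitted in this sketch */
};

/* One generic fetch routine serves every attached variable. */
size_t kdata_fetch(const struct kdata_node *kn, void *buf, size_t len)
{
    if (len > kn->kn_size)
        len = kn->kn_size;
    memcpy(buf, kn->kn_ptr, len);
    return len;
}
```

Because the fetch reads the live variable, a reader always sees the current value without any subsystem-specific glue; the hard part the proposal mentions, specifying locking and validation for writable data, is deliberately left out of this sketch.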
It is possible that using tmpfs as a backend for kernfs and procfs, or sharing some code with tmpfs, would simplify the implementation. It also might not. Consider this possibility, and assess the tradeoffs; do not treat it as a requirement.
Alternatively, investigate FreeBSD's pseudofs and see if this could be a useful platform for this project and base for all the file systems mentioned above.
When working on this project, it is very important to write a complete regression test suite for procfs and kernfs beforehand to ensure that the rewrites do not create incompatibilities.
- Contact: tech-kern
- Mentors: Jean-Yves Migeon
Today a number of operating systems provide some form of kernel-level virtualization that offers better isolation than the traditional (yet more portable) chroot(2). Currently, NetBSD lacks functionality in this field; there have been multiple attempts (gaols, mult) to implement a jails-like system, but none so far has been integrated in base.
The purpose of this project is to study the various implementations found elsewhere (FreeBSD Jails, Solaris Zones, Linux Containers/VServers, ...), and eventually see their plus/minus points. An additional step would be to see how this could be implemented using the various architectural improvements NetBSD has gained, especially rump(3) and kauth(9).
Caution: this is a research project.
- Contact: tech-kern
- Duration estimate: 1 year for port; 3 years for rewrite by one developer
Implement a BSD licensed JFS. A GPL licensed implementation of JFS is available at http://jfs.sourceforge.net/.
Alternatively, or additionally, it might be worthwhile to do a port of the GPL code and allow it to run as a kernel module.
- Contact: tech-kern
Add support for Apple's extensions to ISO9660 to makefs, especially the ability to label files with Type & Creator IDs. See http://developer.apple.com/technotes/fl/fl_36.html.
- Contact: tech-kern
- Duration estimate: 4 months and up
There are many caches in the kernel. Most of these have knobs and adjustments, some exposed and some not, for sizing and writeback rate and flush behavior and assorted other voodoo, and most of the ones that aren't adjustable probably should be.
Currently all or nearly all of these caches operate on autopilot independent of the others, which does not necessarily produce good results, especially if the system is operating in a performance regime different from when the behavior was tuned by the implementors.
It would be nice if all these caches were instead coordinated, so that they don't end up fighting with one another. Integrated control of sizing, for example, would allow explicitly maintaining a sensible balance between different memory uses based on current conditions; right now you might get that, depending on whether the available voodoo happens to work adequately under the workload you have, or you might not. Also, it is probably possible to define some simple rules about eviction, like not evicting vnodes that have UVM pages still to be written out, that can help avoid unnecessary thrashing and other adverse dynamic behavior. And similarly, it is probably possible to prefetch some caches based on activity in others. It might even be possible to come up with one glorious unified cache management algorithm.
Also note that cache eviction and prefetching is fundamentally a form of scheduling, so all of this material should also be integrated with the process scheduler to allow it to make more informed decisions.
This is a nontrivial undertaking.
Step 1 is to just find all the things in the kernel that ought to participate in a coordinated caching and scheduling scheme. This should not take all that long. Some examples include:
- UVM pages
- file system metadata buffers
- VFS name cache
- vnode cache
- size of the mbuf pool
Step 2 is to restructure and connect things up so that it is readily possible to get the necessary information from all the random places in the kernel that these things occupy, without making a horrible mess and without trashing system performance in the process or deadlocking out the wazoo. This is not going to be particularly easy or fast.
Step 3 is to take some simple steps, like suggested above, to do something useful with the coordinated information, and hopefully to show via benchmarks that it has some benefit.
Step 4 is to look into more elaborate algorithms for unified control of everything. The previous version of this project cited IBM's ARC ("Adaptive Replacement Cache") as one thing to look at. (But note that ARC may be encumbered -- someone please check on that and update this page.) Another possibility is to deploy machine learning algorithms to look for and exploit patterns.
Note: this is a serious research project. Step 3 will yield a publishable minor paper; step 4 will yield a publishable major paper if you manage to come up with something that works, and it quite possibly contains enough material for a PhD thesis.
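As a rough sketch of what step 2's "connect things up" might look like (all names hypothetical), each cache could register its size and a shrink method with a central coordinator, which can then rebalance memory across participants:

```c
#include <stddef.h>

/* Hypothetical registration interface for a coordinated-caching scheme. */
struct cache_ops {
    const char *co_name;
    size_t    (*co_size)(void *cookie);            /* bytes in use */
    size_t    (*co_shrink)(void *cookie, size_t);  /* try to free N bytes */
    void       *co_cookie;
};

#define NCACHES 8
static struct cache_ops caches[NCACHES];
static int ncaches;

int cache_register(const struct cache_ops *co)
{
    if (ncaches == NCACHES)
        return -1;
    caches[ncaches++] = *co;
    return 0;
}

/* Ask every registered cache to give back a share of `want` bytes. */
size_t cache_reclaim(size_t want)
{
    size_t got = 0;
    for (int i = 0; i < ncaches && got < want; i++)
        got += caches[i].co_shrink(caches[i].co_cookie, want - got);
    return got;
}

/* Example participant: a toy object pool. */
static size_t pool_bytes = 1000;
static size_t pool_size(void *c) { (void)c; return pool_bytes; }
static size_t pool_shrink(void *c, size_t n)
{
    (void)c;
    if (n > pool_bytes)
        n = pool_bytes;
    pool_bytes -= n;
    return n;
}
```

The real difficulty, as the proposal notes, is not this interface but doing the reclaim without deadlocks or performance collapse; eviction policy (which cache to squeeze first) is where the research content lives.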
- Contact: tech-userlevel
Use puffs or refuse to write an imapfs that you can mount on /var/mail, either by writing a new one or porting the old existing Plan 9 code that does this.
Note: there might be existing solutions, please check upfront and let us know.
- Contact: tech-kern, tech-embed
Implement a flash translation layer.
A flash translation layer does block remapping, translating from visible block addresses used by a file system to physical cells on one or more flash chips. This provides wear leveling, which is essential for effective use of flash, and also typically some amount of read caching and write buffering. (And it takes care of excluding cells that have gone bad.)
This allows FFS, LFS, msdosfs, or whatever other conventional file system to be used on raw flash chips. (Note that SSDs and USB flash drives and so forth contain their own FTLs.)
FTLs involve quite a bit of voodoo and there is a lot of prior art and research; do not just sit down and start coding.
There are also some research FTLs that we might be able to get the code for; it is probably worth looking into this.
Note that NAND flash and NOR flash are different and need different handling, and the various cell types and other variations also warrant different policy choices.
The degree of overprovisioning (that is, the ratio of the raw capacity of the flash chips to the advertised size of the resulting device) should be configurable as this is a critical factor for performance.
Making the device recoverable rather than munching itself in system crashes or power failures is a nice extra, although apparently the market considers this an optional feature for consumer devices.
The flash translation layer should probably be packaged as a driver that attaches to one or more flash chips and provides a disk-type block/character device pair.
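The core remapping idea can be sketched in a few lines. This toy model (all names hypothetical) ignores garbage collection, NAND/NOR differences, and persistence, but shows how rewrites of a logical block migrate to the least-worn free cell:

```c
/*
 * Toy flash translation layer: logical block -> physical cell remapping
 * with a naive least-worn allocator.  Real FTLs are far more involved.
 */
#define NCELLS 8

struct ftl {
    int map[NCELLS];        /* logical block -> physical cell, -1 if unmapped */
    int erases[NCELLS];     /* per-cell erase count, for wear leveling */
    int inuse[NCELLS];
};

static int ftl_pick_cell(struct ftl *f)
{
    int best = -1;
    for (int i = 0; i < NCELLS; i++)
        if (!f->inuse[i] && (best == -1 || f->erases[i] < f->erases[best]))
            best = i;
    return best;            /* least-worn free cell */
}

/* A rewrite of a logical block goes to a fresh cell; the old one is erased. */
int ftl_write(struct ftl *f, int lblock)
{
    int cell = ftl_pick_cell(f);
    if (cell == -1)
        return -1;          /* no free cell: a real FTL would GC here */
    if (f->map[lblock] != -1) {
        f->inuse[f->map[lblock]] = 0;
        f->erases[f->map[lblock]]++;
    }
    f->map[lblock] = cell;
    f->inuse[cell] = 1;
    return cell;
}
```

This is why overprovisioning matters: the more free cells beyond the advertised capacity, the more choices `ftl_pick_cell` has, and the less often the device must stall for garbage collection.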
- Contact: tech-kern
- Duration estimate: 2-3 months
The ext2 file system is the lowest common denominator Unix-like file system in the Linux world, as ffs is in the BSD world. NetBSD has had kernel support for ext2 for quite some time.
However, the Linux world has moved on, with ext3 and now to some extent also ext4 superseding ext2 as the baseline. NetBSD has no support for ext3; the goal of this project is to implement that support.
Since ext3 is a backward-compatible extension that adds journaling to ext2, NetBSD can mount clean ext3 volumes as ext2 volumes. However, NetBSD cannot mount ext3 volumes with journaling and it cannot handle recovery for crashed volumes. As ext2 by itself provides no crash recovery guarantees whatsoever, this journaling support is highly desirable.
The ext3 support should be implemented by extending the existing ext2 support (which is in src/sys/ufs/ext2fs), not by rewriting the whole thing over from scratch. It is possible that some of the ostensibly filesystem-independent code that was added along with the ffs WAPBL journaling extensions might be also useable as part of an ext3 implementation; but it also might not be.
The full requirements for this project include complete support for ext3 in both the kernel and the userland tools. It is possible that a reduced version of this project with a clearly defined subset of these requirements could still be a viable GSOC project; if this appeals to you please coordinate with a prospective mentor. Be advised, however, that several past ext3-related GSOC projects have failed; it is a harder undertaking than you might think.
An additional useful add-on goal would be to audit the locking in the existing ext2 code; the ext2 code is not tagged MPSAFE, meaning it uses a biglock on multiprocessor machines, but it is likely that either it is in fact already safe and just needs to be tagged, or can be easily fixed. (Note that while this is not itself directly related to implementing ext3, auditing the existing ext2 code is a good way to become familiar with it.)
- Contact: tech-kern
In a file system with symlinks, the file system can be seen as a graph rather than a tree. The meaning of .. potentially becomes complicated in this environment.
There is a fairly substantial group of people, some of them big famous names, who think that the usual behavior (where crossing a symlink is different from entering a subdirectory) is a bug, and have made various efforts from time to time to "fix" it. One such fix can be seen in the -L and -P options to ksh's pwd.
Rob Pike implemented a neat hack for this in Plan 9. It is described in http://cm.bell-labs.com/sys/doc/lexnames.html. This project is to implement that logic for NetBSD.
Note however that there's another fairly substantial group of people, some of them also big famous names, who think that all of this is a load of dingo's kidneys, the existing behavior is correct, and changing it would be a bug. So it needs to be possible to switch the implementation on and off as per-process state.
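The lexical core of Pike's scheme fits in a few lines. This hypothetical user-space sketch resolves ".." by dropping the last textual component of the path as the user named it, instead of consulting the filesystem; that is what makes crossing a symlink and entering a subdirectory behave the same:

```c
#include <string.h>

/*
 * Lexical ".." handling: "cd /a/link; cd .." returns to /a even if
 * "link" points elsewhere.  Real code would live in namei and be
 * per-process state, switchable on and off.
 */
void lexname_dotdot(char *path)
{
    char *slash = strrchr(path, '/');
    if (slash == NULL || slash == path)
        strcpy(path, "/");      /* at or above the root */
    else
        *slash = '\0';          /* drop the last lexical component */
}
```

The hard part of the project is not this string operation but maintaining the lexical name alongside the vnode through every namei call, and doing so only for processes that opt in.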
- Contact: tech-embed
- Duration estimate: 2 months
Implement a NetBSD version of the compressed cache system (for low-memory devices): http://linuxcompressed.sourceforge.net/.
- Contact: tech-kern
- Duration estimate: 2-3 months
Currently the buffer handling logic only sorts the buffer queue (aka disksort). In an ideal world it would be able to coalesce adjacent small requests, as this can produce considerable speedups. It might also be worthwhile to split large requests into smaller chunks on the fly as needed by hardware or lower-level software.
Note that the latter interacts nontrivially with the ongoing dynamic MAXPHYS project and might not be worthwhile. Coalescing adjacent small requests (up to some potentially arbitrary MAXPHYS limit) is worthwhile regardless, though.
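The coalescing check itself is simple. This hypothetical sketch (a stand-in for struct buf, with an arbitrary transfer limit) merges a request into its predecessor in the sorted queue when the two are byte-adjacent and the result stays under the limit:

```c
/*
 * Sketch of coalescing adjacent requests in a sorted buffer queue.
 * Assumes 512-byte blocks and an arbitrary transfer limit standing in
 * for the (possibly dynamic) MAXPHYS value.
 */
#define MAXPHYS_SKETCH (64 * 1024)

struct req {
    long blkno;     /* start block */
    long bcount;    /* bytes */
};

/* Merge b into a if they are adjacent and the result fits; 1 on success. */
int req_coalesce(struct req *a, const struct req *b)
{
    if (a->blkno * 512 + a->bcount != b->blkno * 512)
        return 0;   /* not adjacent */
    if (a->bcount + b->bcount > MAXPHYS_SKETCH)
        return 0;   /* would exceed the transfer limit */
    a->bcount += b->bcount;
    return 1;
}
```

In the real kernel the interesting work is elsewhere: the merged request must still complete each original buf's biodone() callback individually, and writes and reads must not be mixed.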
- Contact: tech-toolchain
- Duration estimate: 2 months
BSD make (aka bmake) uses traditional suffix rules (.c.o: ...) instead of pattern rules like gmake's (%.c:%.o: ...) which are more general and flexible.
The suffix module should be re-written to work from a general match-and-transform function on targets, which is sufficient to implement not only traditional suffix rules and gmake pattern rules, but also whatever other more general transformation rule syntax comes down the pike next. Then the suffix rule syntax and pattern rule syntax can both be fed into this code.
Note that it is already possible to write rules where the source is computed explicitly based on the target, simply by using $(.TARGET) in the right hand side of the rule. Investigate whether this logic should be rewritten using the match-and-transform code, or if the match-and-transform code should use the logic that makes these rules possible instead.
Implementing pattern rules is widely desired in order to be able to read more makefiles written for gmake, even though gmake's pattern rules aren't well designed or particularly principled.
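The match-and-transform function described above might look like the following sketch (names hypothetical): match the target against a gmake-style pattern, extract the stem, and splice it into the source pattern. A traditional suffix rule `.c.o` is then just the pattern pair `("%.o", "%.c")`:

```c
#include <stdio.h>
#include <string.h>

/*
 * General match-and-transform: if `target` matches `tgtpat` (one '%'
 * wildcard), write the corresponding source into `source` and return 1.
 */
int pat_transform(const char *tgtpat, const char *srcpat,
                  const char *target, char *source, size_t len)
{
    const char *pc = strchr(tgtpat, '%');
    const char *sc = strchr(srcpat, '%');
    if (pc == NULL || sc == NULL)
        return 0;
    size_t pre = (size_t)(pc - tgtpat);
    size_t suf = strlen(pc + 1);
    size_t tlen = strlen(target);
    if (tlen < pre + suf ||
        strncmp(target, tgtpat, pre) != 0 ||
        strcmp(target + tlen - suf, pc + 1) != 0)
        return 0;       /* target does not match the pattern */
    /* stem = target minus prefix and suffix; splice it into srcpat */
    int stemlen = (int)(tlen - pre - suf);
    snprintf(source, len, "%.*s%.*s%s",
        (int)(sc - srcpat), srcpat, stemlen, target + pre, sc + 1);
    return 1;
}
```

Suffix rules only ever need `pre == 0`, which is why they fall out of this as a special case; the same function also covers prefixed patterns like `lib%.a`.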
- Contact: tech-net
Modern 802.11 NICs provide two or more transmit descriptor rings, one for each priority level or 802.11e access category. Add to NetBSD a generic facility for placing a packet onto a different hardware transmit queue according to its classification by pf or IP Filter. Demonstrate this facility on more than one 802.11 chipset.
This project is on hold until the conversion project is completed.
- Contact: tech-net
Design and program a scheme for extending the operating range of 802.11 networks by using techniques like frame combining and error-correcting codes to cope with low S/(N+I) ratio. Implement your scheme in one or two WLAN device drivers -- Atheros & Realtek, say.
This project is on hold until the conversion project is completed.
If you plan to apply to work on any of the projects listed on this site, you should start by sending an email to the contact points listed at the top of each project description. The purpose of this email is to introduce yourself to the NetBSD community, if you are new to it, and to explain in detail how you will work on the project (e.g. the overall plan, milestones, schedule, etc.). This is particularly important if the project is up for funding or if it is a Google Summer of Code project.
The sections below include a guideline on the kind of information we are interested in hearing in your initial communication. The level of detail involved will depend on how familiar you are with the NetBSD Project and the developer community.
If you are a newcomer (e.g. you are a Google Summer of Code student that has just installed NetBSD for the first time), you are encouraged to answer as many questions as possible. If you are an old-timer, however, you can skip the most obvious questions and focus on preparing a detailed project specification instead.
About your project
What is the goal of the project? (Short overview)
What will be the deliverables of the project? (Code, documentation, ...)
Give an overview of how you intend to reach the project's goal in the form of milestones and a schedule.
Is similar software already available elsewhere, e.g. for Linux or any other BSD?
Is the project a port of software, or a rewrite? (remember: No GPL in the NetBSD kernel!)
About your project and NetBSD
If your working area is the core NetBSD operating system: have you installed NetBSD and made first experiences with hands-on configuration? Have you rebuilt the kernel and the userland, either in full or in parts? If you plan to work on pkgsrc, have you installed packages from source and binary? Have you created a package on your own?
Have you found the relevant places that your project is based on in the source code, and read through it?
How will your project integrate into NetBSD? (Userland tool, kernel subsystem, driver, patch set, pkgsrc, ...)
What interfaces in NetBSD will your project use? (Go into details here! What module/file names, functions, data structures etc. are of relevance for your project?)
To what degree are you familiar with those interfaces? (not/some/very, details?)
Is knowledge on other topics required for this project, e.g. on hardware, software other than NetBSD, APIs, protocols, etc.? If so, give details and references.
To what degree are you familiar with those? (not/some/very, details?)
If the project involves hardware (e.g. writing drivers, doing a port to new hardware, ...): do you own the hardware or have access to it?
About you
Can you list some prior projects that you have worked on so far? Include details like programming language, duration, number of people involved, project goal, if you used CVS, SVN or similar, and whatever else we may find thrilling! If you have a CV/resume online, feel free to include a link.
Do you have any prior experience with programming NetBSD? In what area? If you did send some problem reports (PRs) or patches, please include references.
Have you previously discussed your project within NetBSD, either on a mailing list or with some specific developers? If so, please give us either the names/email addresses of those developers or point us towards the discussions on our list (via http://mail-index.NetBSD.org/).
How do we contact you for question, comments, suggestions, etc.?
Is there anything else you'd like us to know? Did we forget any important details or questions?
This is the archive of completed projects. Project proposals are preserved after they are completed for two reasons: first, to show that these project pages are useful, and second, to ensure that the URLs pointing to the projects remain valid.
- Add Argon2 password hashing
- Update web firewall documentation from ipfilter to npf
- Filesystem Fuzzing with American Fuzzy Lop
- Make Anita support additional virtual machine systems
- Apropos replacement based on mandoc and SQLite's FTS
- Create an SQL backend and statistics/query page for ATF test results
- Lockless, atomic producer/consumer queues
- NetBSD/aws -- Bringing NetBSD to Amazon's Web Services
- Benchmark NetBSD
- Build the kernel as PIE
- Make /boot.cfg handling machine independent
- Emulating linux binaries on ARM64
- CVS Migration for NetBSD repos
- Automate handling of donations (350h)
- DRM 32bit and linux compat code
- Machine-independent EFI bootloader (for ARM)
- Implement file system flags to scrub data blocks before deletion
- fsck for UDF
- Add kqueue support to GIO
- Implement stable privacy addresses for IPv6
- Kernel Address SANitizer
- Socket option to timestamp UDP packets in the kernel
- Curses library automated testing
- Improve and extend libintl
- Live upgrade
- Make system(3) and popen(3) use posix_spawn(3) internally
- Move beyond TWM
- NetBSD/azure -- Bringing NetBSD to Microsoft Azure
- New automounter
- Web UI for NPF
- Parallelize page queues
- Add other package format(s) to pkgsrc
- Separate test depends for pkgsrc
- Unprivileged pkgsrc builds
- Improve GNOME support in pkgsrc
- Unify standard installation tasks
- TLS (HTTPS) support in net/tnftp
- Add support for chdir(2) support in posix_spawn(3)
- POSIX Test Suite Compliance
- Userspace file system and device driver code sharing
- Implement RFC 6056: 'Recommendations for Transport-Protocol Port Randomization'
- BSD licensed rsync replacement
- rumpkernel fuzzing (350h)
- Integrate SafeStack with the basesystem
- Scalable entropy gathering
- Proper locking in scsipi
- Revamped struct protosw
- Sysinst enhancements
- Add binary pkg install to sysinst
- System upgrade
- Fix PR 56086: Resume hangs when tpm(4) is enabled
- Adapt TriforceAFL for the NetBSD kernel fuzzing
- Add FFS support to U-Boot
- Make u-boot compilable on NetBSD
- Kernel Undefined Behavior SANitizer
- Building userland with sanitizers
- Make NetBSD a supported guest OS under VirtualBox
- Port Wine to amd64
- Add support for mapping userspace via SMAP/SMEP on newer x86 CPUs
- Kernel Module support for Xen
- xhci resume support
- Port XRay to the NetBSD kernel
- Finish ZFS
- ZFS root support (bootloader and mount_root)
This project proposal is a subtask of smp networking and is eligible for funding independently.
The goal of this project is to implement full virtual network stacks. A virtual network stack collects all the global data for an instance of a network stack (excluding AF_LOCAL). This includes the routing table, data for multiple domains and their protocols, and the mutexes needed for regulating access to it all. Such an instance of a networking stack is called a brane.
An interface belongs to a brane, as do processes. This can be considered a chroot(2) for networking, e.g. chbrane(2).
IMPORTANT: This project was completed by Tyler Retzlaff. You may still contact the people above for details, but please do not submit an application for this project.
The goal of this project is to split out the obvious PR*_xxx requests that should never have been dispatched through pr_usrreq/pr_ctloutput. Note that pr_ctloutput should be replaced by pr_getopt/pr_setopt:
- PRU_CONTROL -> pr_ioctl
- PRU_PURGEIF -> pr_purgeif
- PRC0_GETOPT -> pr_getopt
- PRC0_SETOPT -> pr_setopt
It's expected that pr_drain will just schedule a kernel continuation, and that each protocol gains explicit lifecycle hooks such as:
- int pr_init(void *dsc)
- int pr_fini(void *dsc)
This project proposal is a subtask of smp networking.
The goal of this project is to make the SYN cache optional: for small systems it is complete overkill.
This project proposal is a subtask of smp networking.
The goal of this project is to remove the ARP, AARP, ISO SNPA, and IPv6
Neighbors from the routing table. Instead, the ifnet
structure should
have a set of nexthop caches (usually implemented using
patricia trees), one per address family.
Each nexthop entry should contain the datalink header needed to reach the
neighbor.
This will remove cloneable routes from the routing table and remove the need to maintain protocol-specific code in the common Ethernet, FDDI, PPP, etc. code and put it back where it belongs, in the protocol itself.
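A sketch of such a per-address-family nexthop cache entry (all names hypothetical; a real version would hang patricia trees off struct ifnet): the entry carries the prebuilt datalink header, so the output path no longer consults cloned routes:

```c
#include <string.h>

/* Hypothetical nexthop cache entry, one tree of these per address family. */
struct nexthop {
    unsigned char nh_addr[16];   /* protocol address of the neighbor */
    int           nh_addrlen;
    unsigned char nh_dlhdr[32];  /* prebuilt datalink (e.g. Ethernet) header */
    int           nh_dlhdrlen;
    int           nh_valid;
};

/* Resolution (ARP/ND) fills in the entry once... */
void nh_set(struct nexthop *nh, const void *dlhdr, int len)
{
    memcpy(nh->nh_dlhdr, dlhdr, len);
    nh->nh_dlhdrlen = len;
    nh->nh_valid = 1;
}

/* ...and the output path just prepends the cached header. */
int nh_prepend(const struct nexthop *nh, unsigned char *pkt)
{
    if (!nh->nh_valid)
        return -1;               /* would trigger resolution instead */
    memcpy(pkt, nh->nh_dlhdr, nh->nh_dlhdrlen);
    return nh->nh_dlhdrlen;
}
```

Because the header is built by the protocol's own resolution code, the common Ethernet/FDDI/PPP output paths stay protocol-agnostic, which is exactly the separation the proposal asks for.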
This project proposal is a subtask of smp networking.
The goal of this project is to improve the way the processing of incoming packets is handled.
Instead of having a set of active workqueue lwps waiting to service sockets, the kernel should use the lwp that is blocked on the socket to service the work item. The lwp is not doing anything productive while blocked, and it has an interest in getting that work item done; maybe we can even copy the data directly to the user's address space and avoid queuing in the socket at all.
This project proposal is a subtask of smp networking.
The goal of this project is to implement continuations at the kernel level. Most of the pieces are already available in the kernel, so this can be reworded as: combine callouts, softints, and workqueues into a single framework. Continuations are meant to be cheap; very cheap.
These continuations are a dispatching system for making callbacks at scheduled times or in different thread/interrupt contexts. They aren't "continuations" in the usual sense such as you might find in Scheme code.
Please note that the main goal of this project is to simplify the implementation of SMP networking, so care must be taken in the design of the interface to support all the features required for this other project.
The proposed interface looks like the following. This interface is mostly derived from the callout(9) API and is a superset of the softint(9) API. The most significant change is that workqueue items are not tied to a specific kernel thread.
kcont_t *kcont_create(kcont_wq_t *wq, kmutex_t *lock, void (*func)(void *, kcont_t *), void *arg, int flags);

A wq must be supplied. It may be one returned by kcont_workqueue_acquire or a predefined workqueue such as (sorted from highest priority to lowest): wq_softserial, wq_softnet, wq_softbio, wq_softclock, wq_prihigh, wq_primedhigh, wq_primedlow, wq_prilow.

lock, if non-NULL, should be locked before calling func(arg) and released afterwards. However, if the lock is released and/or destroyed before the called function returns, then, before returning, kcont_setmutex must be called with either a new mutex to be released or NULL. If acquiring lock would block, other pending kernel continuations which depend on other locks may be dispatched in the meantime. However, all continuations sharing the same set of { wq, lock, [ci] } need to be processed in the order they were scheduled.

flags must be 0. This field is just provided for extensibility.

int kcont_schedule(kcont_t *kc, struct cpu_info *ci, int nticks);
If the continuation is marked as INVOKING, an error of EBUSY should be returned. If nticks is 0, the continuation is marked as INVOKING while EXPIRED and PENDING are cleared, and the continuation is scheduled to be invoked without delay. Otherwise, the continuation is marked as PENDING while EXPIRED status is cleared, and the timer is reset to nticks. Once the timer expires, the continuation is marked as EXPIRED and INVOKING, and the PENDING status is cleared. If ci is non-NULL, the continuation is invoked on the specified CPU if the continuation's workqueue has per-cpu queues; if that workqueue does not provide per-cpu queues, an error of ENOENT is returned. Otherwise, when ci is NULL, the continuation is invoked on either the current CPU or the next available CPU, depending on whether the continuation's workqueue has per-cpu queues or not, respectively.

void kcont_destroy(kcont_t *kc);
kmutex_t *kcont_getmutex(kcont_t *kc);

Returns the lock currently associated with the continuation kc.

void kcont_setarg(kcont_t *kc, void *arg);

Updates arg in the continuation kc. If no lock is associated with the continuation, then arg may be changed at any time; however, if the continuation is being invoked, it may not pick up the change. Otherwise, kcont_setarg must only be called when the associated lock is locked.

kmutex_t *kcont_setmutex(kcont_t *kc, kmutex_t *lock);

Updates the lock associated with the continuation kc and returns the previous lock. If no lock is currently associated with the continuation, then calling this function with a lock other than NULL will trigger an assertion failure. Otherwise, kcont_setmutex must be called only when the existing lock (which will be replaced) is locked. If kcont_setmutex is called as a result of the invocation of func, then after kcont_setmutex has been called but before func returns, the replaced lock must have been released, and the replacement lock, if non-NULL, must be locked upon return.

void kcont_setfunc(kcont_t *kc, void (*func)(void *), void *arg);

Updates func and arg in the continuation kc. If no lock is associated with the continuation, then only arg may be changed. Otherwise, kcont_setfunc must be called only when the associated lock is locked.

bool kcont_stop(kcont_t *kc);
The kcont_stop function stops the timer associated with the continuation handle kc. The PENDING and EXPIRED status for the continuation handle is cleared. It is safe to call kcont_stop on a continuation handle that is not pending, so long as it is initialized. kcont_stop will return a non-zero value if the continuation was EXPIRED.

bool kcont_pending(kcont_t *kc);

The kcont_pending function tests the PENDING status of the continuation handle kc. A PENDING continuation is one whose timer has been started and has not expired. Note that it is possible for a continuation's timer to have expired without being invoked if the continuation's lock could not be acquired or there are higher priority threads preventing its invocation. Note that it is only safe to test PENDING status when holding the continuation's lock.

bool kcont_expired(kcont_t *kc);

Tests to see if the continuation's function has been invoked since the last kcont_schedule.

bool kcont_active(kcont_t *kc);

bool kcont_invoking(kcont_t *kc);

Tests the INVOKING status of the handle kc. This flag is set just before a continuation's function is called. Since the scheduling of the worker threads may induce delays, other pending higher-priority code may run before the continuation function is allowed to run. This may create a race condition if this higher-priority code deallocates storage containing one or more continuation structures whose continuation functions are about to be run. In such cases, one technique to prevent references to deallocated storage would be to test whether any continuation functions are in the INVOKING state using kcont_invoking, and if so, to mark the data structure and defer storage deallocation until the continuation function is allowed to run. For this handshake protocol to work, the continuation function will have to use the kcont_ack function to clear this flag.

bool kcont_ack(kcont_t *kc);

Clears the INVOKING state in the continuation handle kc. This is used in situations where it is necessary to protect against the race condition described under kcont_invoking.

kcont_wq_t *kcont_workqueue_acquire(pri_t pri, int flags);
Returns a workqueue that matches the specified criteria; if multiple requesters ask for the same criteria, they are all returned the same workqueue. pri specifies the priority at which the kernel thread which empties the workqueue should run.

If flags is 0, then standard operation is requested. However, the following flag(s) may be bitwise ORed together:

- WQ_PERCPU specifies that the workqueue should have a separate queue for each CPU, thus allowing continuations to be invoked on specific CPUs.

int kcont_workqueue_release(kcont_wq_t *wq);

Releases an acquired workqueue. On the last release, the workqueue's resources are freed and the workqueue is destroyed.
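To make the intended usage concrete, here is a sketch in C-style pseudocode. Since the kcont API proposed above does not exist yet, this cannot compile; the struct conn, conn_free, and the scheduling policy are invented for illustration. It shows how a protocol might defer socket work to a continuation and use the INVOKING/kcont_ack handshake before freeing the enclosing structure:

```c
/*
 * Illustrative pseudocode only: the kcont API described above is a
 * proposal, so none of these calls exist yet. All kcont_* names come
 * from the interface description; conn/conn_free are hypothetical.
 */
struct conn {
	kmutex_t	 c_lock;
	kcont_t		*c_kc;
	bool		 c_dying;
	/* ... protocol state ... */
};

static void
conn_work(void *arg, kcont_t *kc)
{
	struct conn *c = arg;		/* c_lock is held on entry */

	kcont_ack(kc);			/* clear INVOKING for the handshake */
	if (c->c_dying) {
		/* deallocation was deferred while we were INVOKING */
		kcont_setmutex(kc, NULL);	/* detach c_lock... */
		mutex_exit(&c->c_lock);		/* ...and release it ourselves */
		conn_free(c);
		return;
	}
	/* ... process pending work; c_lock is released by the framework ... */
}

static void
conn_poke(struct conn *c)
{
	/* run the continuation on its workqueue after one tick */
	kcont_schedule(c->c_kc, NULL, 1);
}
```

The detach-then-release dance in the c_dying path follows the kcont_setmutex rule above: when func releases or destroys its lock, it must first call kcont_setmutex with a replacement or NULL.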
This project proposal is a subtask of smp networking.
The goal of this project is to implement interrupt handling at the granularity of a networking interface. When a network device gets an interrupt, it could call <iftype>_defer(ifp) to schedule a kernel continuation (see kernel continuations) for that interface, which could then invoke <iftype>_poll. Whether the interrupt source should be masked depends on whether the device is a DMA device or a PIO device. This routine should then call (*ifp->if_poll)(ifp) to deal with the interrupt's servicing.
During servicing, any received packets should be passed up via (*ifp->if_input)(ifp, m), which would be responsible for ALTQ or any other optional processing as well as protocol dispatch. Protocol dispatch in <iftype>_input decodes the datalink headers, if needed, via a table lookup and calls the matching protocol's pr_input to process the packet. As such, interrupt queues (e.g. ipintrq) would no longer be needed. Any transmitted packets can be processed, as can MII events. Either true or false should be returned by if_poll depending on whether another invocation of <iftype>_poll for this interface should be immediately scheduled or not, respectively.
Memory allocation has to be prohibited in the interrupt routines. The device's if_poll routine should pre-allocate enough mbufs to do any required buffering. For devices doing DMA, the buffers are placed into receive descriptors to be filled via DMA.

For devices doing PIO, pre-allocated mbufs are enqueued onto the softc of the device so that when the interrupt routine needs one it simply dequeues one, fills it in, enqueues it onto a completed queue, and finally calls <iftype>_defer. If the number of pre-allocated mbufs drops below a threshold, the driver may decide to increase the number of mbufs that if_poll pre-allocates. If there are no mbufs left to receive the packet, the packet is dropped and the number of mbufs for if_poll to pre-allocate should be increased.
When interrupts are unmasked depends on a few things. If the device is interrupting "too often", it might make sense for the device's interrupts to remain masked and just schedule the device's continuation for the next clock tick. This assumes the system has a high enough value set for HZ.
This project proposal is a subtask of smp networking.
The goal of this project is to enhance the networking protocols to process incoming packets more efficiently. The basic idea is the following: when a packet is received and it is destined for a socket, simply place the packet in the socket's receive PCQ (see atomic pcq) and wake the blocking socket. Then, the protocol is able to process the next packet.
The typical packet flow from ip_input is to {rip,tcp,udp}_input which:

- Does the lookup to locate the socket, which takes a reader lock on the appropriate pcbtable's hash bucket.
- If found and in the proper state:
  - Do not lock the socket, since that might block and therefore stop packet demultiplexing.
  - pcq_put the packet to the pcb's pcq.
  - kcont_schedule the worker continuation with a small delay (~100ms). See kernel continuations.
  - Lock the socket's cvmutex.
  - Release the pcbtable lock.
  - If TCP and in sequence, then if we need to send an immediate ACK:
    - Try to lock the socket.
    - If successful, send an ACK.
  - Set a flag to process the PCQ.
  - cv_signal the socket's cv.
  - Release the cvmutex.
- If not found or not in the proper state:
  - Release the pcb hash table lock.
This project proposal is a subtask of smp networking.
The goal of this project is to implement lockless, atomic and generic Radix and Patricia trees. BSD systems have always used a radix tree for their routing tables. However, the radix tree implementation is showing its age. Its lack of flexibility (it is only suitable for use in a routing table) and overhead of use (requires memory allocation/deallocation for insertions and removals) make replacing it with something better tuned to today's processors a necessity.
Since a radix tree branches on bit differences, finding these bit differences efficiently is crucial to the speed of tree operations. This is most quickly done by XORing the key and the tree node's value together and then counting the number of leading zeroes in the result of the XOR. Many processors today (ARM, PowerPC) have instructions that can count the number of leading zeroes in a 32 bit word (and even a 64 bit word). Even those that do not can use a simple constant time routine to count them:
int
clz(unsigned int bits)
{
    int zeroes = 0;

    if (bits == 0)
        return 32;
    if (bits & 0xffff0000) bits &= 0xffff0000; else zeroes += 16;
    if (bits & 0xff00ff00) bits &= 0xff00ff00; else zeroes += 8;
    if (bits & 0xf0f0f0f0) bits &= 0xf0f0f0f0; else zeroes += 4;
    if (bits & 0xcccccccc) bits &= 0xcccccccc; else zeroes += 2;
    if (bits & 0xaaaaaaaa) bits &= 0xaaaaaaaa; else zeroes += 1;
    return zeroes;
}
The existing BSD radix tree implementation does not use this method but instead uses a far more expensive method of comparison. Adapting the existing implementation to do the above is actually more expensive than writing a new implementation.
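To see why this primitive matters for radix trees, note that the branch decision reduces to one XOR plus one count-leading-zeroes. A small self-contained illustration (the name first_bit_difference is ours; the clz routine from above is repeated so the snippet stands alone, and a 32-bit unsigned int is assumed):

```c
/* clz() as given above: count leading zeroes in a 32-bit word. */
static int
clz(unsigned int bits)
{
    int zeroes = 0;

    if (bits == 0)
        return 32;
    if (bits & 0xffff0000) bits &= 0xffff0000; else zeroes += 16;
    if (bits & 0xff00ff00) bits &= 0xff00ff00; else zeroes += 8;
    if (bits & 0xf0f0f0f0) bits &= 0xf0f0f0f0; else zeroes += 4;
    if (bits & 0xcccccccc) bits &= 0xcccccccc; else zeroes += 2;
    if (bits & 0xaaaaaaaa) bits &= 0xaaaaaaaa; else zeroes += 1;
    return zeroes;
}

/*
 * Bit offset, counted from the most significant bit, of the first
 * difference between two 32-bit keys; 32 means the keys are equal.
 * This is exactly the branch-point computation a radix tree needs.
 */
static int
first_bit_difference(unsigned int a, unsigned int b)
{
    return clz(a ^ b);
}
```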
The primary requirements for the new radix tree are:

- Be self-contained. It cannot require additional memory other than what is used in its data structures.
- Be generic. A radix tree has uses outside networking.
To make the radix tree flexible, all knowledge of how keys are represented has to be encapsulated into a pt_tree_ops_t structure with these functions:
bool ptto_matchnode(const void *foo, const void *bar, pt_bitoff_t max_bitoff, pt_bitoff_t *bitoffp, pt_slot_t *slotp);

Returns true if both foo and bar objects have the identical string of bits starting at *bitoffp and ending before max_bitoff. In addition to returning true, *bitoffp should be set to the smaller of max_bitoff or the length, in bits, of the compared bit strings. Any bits before *bitoffp are to be ignored. If the strings of bits are not identical, *bitoffp is set to where the bit difference occurred, *slotp is the value of that bit in foo, and false is returned. The foo and bar (if not NULL) arguments are pointers to a key member inside a tree object. If bar is NULL, then assume it points to a key consisting entirely of zero bits.

bool ptto_matchkey(const void *key, const void *node_key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);

Returns true if both key and node_key objects have identical strings of bitlen bits starting at bitoff. The key argument is the same key argument supplied to ptree_find_filtered_node.

pt_slot_t ptto_testnode(const void *node_key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);

Returns bitlen bits starting at bitoff from node_key. The node_key argument is a pointer to the key member inside a tree object.

pt_slot_t ptto_testkey(const void *key, pt_bitoff_t bitoff, pt_bitlen_t bitlen);

Returns bitlen bits starting at bitoff from key. The key argument is the same key argument supplied to ptree_find_filtered_node.
All bit offsets are relative to the most significant bit of the key.
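As a concrete (hypothetical) illustration, here is how ptto_testkey might be implemented when the key is a single 32-bit word. The pt_* typedefs are stand-ins for the real ones, and 0 < bitlen < 32 with bitoff + bitlen <= 32 is assumed:

```c
#include <stdint.h>
#include <string.h>

/* Assumed typedefs, for this sketch only. */
typedef unsigned int pt_bitoff_t;
typedef unsigned int pt_bitlen_t;
typedef unsigned int pt_slot_t;

/*
 * Sketch of a ptto_testkey implementation for a key that is a single
 * 32-bit word in host byte order: return the bitlen bits starting
 * bitoff bits below the most significant bit. Assumes 0 < bitlen < 32
 * and bitoff + bitlen <= 32.
 */
static pt_slot_t
example_testkey(const void *key, pt_bitoff_t bitoff, pt_bitlen_t bitlen)
{
    uint32_t k;

    memcpy(&k, key, sizeof(k));
    return (k >> (32 - bitoff - bitlen)) & ((1u << bitlen) - 1u);
}
```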
The ptree programming interface should contain these routines:

void ptree_init(pt_tree_t *pt, const pt_tree_ops_t *ops, size_t ptnode_offset, size_t key_offset);

Initializes a ptree. If pt points at an existing ptree, all knowledge of that ptree is lost. The pt argument is a pointer to the pt_tree_t to be initialized. The ops argument is a pointer to the pt_tree_ops_t used by the ptree, which has the four members described above. The ptnode_offset argument contains the offset from the beginning of an item to its pt_node_t member. The key_offset argument contains the offset from the beginning of an item to its key data; if 0 is used, a pointer to the beginning of the item will be generated.

void *ptree_find_filtered_node(pt_tree_t *pt, const void *key, pt_filter_t filter, void *filter_ctx);

The filter argument is either NULL or a function of type bool (*)(const void *, void *, int);

bool ptree_insert_mask_node(pt_tree_t *pt, void *item, pt_bitlen_t masklen);

bool ptree_insert_node(pt_tree_t *pt, void *item);

void *ptree_iterate(pt_tree_t *pt, const void *node, pt_direction_t direction);

void ptree_remove_node(pt_tree_t *pt, const pt_tree_ops_t *ops, void *item);
IMPORTANT: This project was completed by Matt Thomas. You may still contact the people above for details, but please do not submit an application for this project.
This project proposal is a subtask of smp networking and is eligible for funding independently.
The goal of this project is to implement lockless, atomic producer/consumer queues (PCQs) in the kernel. A PCQ allows multiple writers (producers) but only a single reader (consumer). Compare-And-Store operations are used to allow lockless updates. The consumer is expected to be protected by a mutex that covers the structure the PCQ is embedded into (e.g. socket lock, ifnet hwlock). These queues operate in a First-In, First-Out (FIFO) manner. The act of inserting or removing an item from a PCQ does not modify the item in any way. A PCQ does not prevent an item from being inserted multiple times into a single PCQ.
Since this structure is not specific to networking, it has to be accessed via <sys/pcq.h> and the code has to live in kern/subr_pcq.c.
The proposed interface looks like this:
bool pcq_put(pcq_t *pcq, void *item);

Places item at the end of the queue. If there is no room in the queue for the item, false is returned; otherwise true is returned. The item must not have the value NULL.

void *pcq_peek(pcq_t *pcq);

Returns the next item to be consumed from the queue but does not remove it from the queue. If the queue is empty, NULL is returned.

void *pcq_get(pcq_t *pcq);

Removes the next item to be consumed from the queue and returns it. If the queue is empty, NULL is returned.

size_t pcq_maxitems(pcq_t *pcq);

Returns the maximum number of items that the queue can store at any one time.

pcq_t *pcq_create(size_t maxlen, km_flags_t kmflags);

void pcq_destroy(pcq_t *pcq);
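To illustrate the semantics, here is a minimal user-space sketch of a PCQ using C11 atomics. It is not the proposed kernel implementation: pcq_create here drops the km_flags_t argument, and pcq_peek, pcq_maxitems, and pcq_destroy are omitted. It only demonstrates how producers claim slots with Compare-And-Store while the single consumer proceeds without a lock of its own:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Illustrative sketch of PCQ semantics (multiple producers, a single
 * consumer, FIFO, NULL items forbidden). NOT the NetBSD code.
 */
typedef struct {
    size_t         pcq_maxitems;
    _Atomic size_t pcq_prod;     /* next slot a producer claims */
    _Atomic size_t pcq_cons;     /* next slot the consumer reads */
    void *_Atomic *pcq_items;
} pcq_t;

pcq_t *
pcq_create(size_t maxlen)
{
    pcq_t *pcq = malloc(sizeof(*pcq));

    pcq->pcq_maxitems = maxlen;
    atomic_init(&pcq->pcq_prod, 0);
    atomic_init(&pcq->pcq_cons, 0);
    pcq->pcq_items = malloc(maxlen * sizeof(void *));
    for (size_t i = 0; i < maxlen; i++)
        atomic_init(&pcq->pcq_items[i], NULL);
    return pcq;
}

bool
pcq_put(pcq_t *pcq, void *item)
{
    for (;;) {
        size_t prod = atomic_load(&pcq->pcq_prod);

        if (prod - atomic_load(&pcq->pcq_cons) >= pcq->pcq_maxitems)
            return false;        /* no room in the queue */
        /* Compare-And-Store claims slot 'prod' for this producer. */
        if (atomic_compare_exchange_weak(&pcq->pcq_prod, &prod, prod + 1)) {
            atomic_store(&pcq->pcq_items[prod % pcq->pcq_maxitems], item);
            return true;
        }
    }
}

void *
pcq_get(pcq_t *pcq)
{
    size_t cons = atomic_load(&pcq->pcq_cons);
    /* Swap the slot with NULL: NULL means empty (or not yet filled). */
    void *item = atomic_exchange(&pcq->pcq_items[cons % pcq->pcq_maxitems], NULL);

    if (item != NULL)
        atomic_store(&pcq->pcq_cons, cons + 1);
    return item;
}
```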
This project proposal is a subtask of smp networking.
The goal of this project is to implement lockless, atomic FIFO/LIFO queues in the kernel. The routines to be implemented allow commonly typed items to be locklessly inserted at either the head or the tail of a queue for last-in, first-out (LIFO) or first-in, first-out (FIFO) behavior, respectively. However, a queue is not intrinsically LIFO or FIFO; its behavior is determined solely by which method each item was pushed onto the queue.

It is only possible to remove an item from the head of the queue. This removal is also performed in a lockless manner.
All items in the queue must share an atomic_queue_link_t member at the same offset from the beginning of the item. This offset is passed to atomic_qinit.
The proposed interface looks like this:
void atomic_qinit(atomic_queue_t *q, size_t offset);

Initializes the atomic_queue_t queue at q. offset is the offset to the atomic_queue_link_t inside the data structure where the pointer to the next item in this queue will be placed. It should be obtained using offsetof.

void *atomic_qpeek(atomic_queue_t *q);

Returns a pointer to the item at the head of the supplied queue q. If there was no item because the queue was empty, NULL is returned. No item is removed from the queue. Given this is an unlocked operation, it should only be used as a hint as to whether the queue is empty or not.

void *atomic_qpop(atomic_queue_t *q);

Removes the item (if present) at the head of the supplied queue q and returns a pointer to it. If there was no item to remove because the queue was empty, NULL is returned. Because this routine uses atomic Compare-And-Store operations, the returned item should stay accessible for some indeterminate time so that other interrupted or concurrent callers to this function with this q can continue to dereference it without trapping.

void atomic_qpush_fifo(atomic_queue_t *q, void *item);

Places item at the tail of the atomic_queue_t queue at q.

void atomic_qpush_lifo(atomic_queue_t *q, void *item);

Places item at the head of the atomic_queue_t queue at q.
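To illustrate, here is a minimal user-space sketch of the LIFO side of such a queue using C11 atomics (a classic Treiber stack). The structure layouts are assumptions, and the lockless FIFO (tail) push is omitted because it requires more machinery than fits a short example:

```c
#include <stdatomic.h>
#include <stddef.h>

/*
 * Illustrative sketch of the LIFO half of the proposed interface.
 * The type layouts are assumptions; only atomic_qinit,
 * atomic_qpush_lifo, and atomic_qpop are shown.
 */
typedef struct { void *_Atomic aql_next; } atomic_queue_link_t;

typedef struct {
    void *_Atomic aq_head;
    size_t        aq_offset;    /* offset of the link inside an item */
} atomic_queue_t;

static atomic_queue_link_t *
aq_link(const atomic_queue_t *q, void *item)
{
    return (atomic_queue_link_t *)((char *)item + q->aq_offset);
}

void
atomic_qinit(atomic_queue_t *q, size_t offset)
{
    atomic_init(&q->aq_head, NULL);
    q->aq_offset = offset;
}

void
atomic_qpush_lifo(atomic_queue_t *q, void *item)
{
    void *head;

    do {
        head = atomic_load(&q->aq_head);
        /* Link the new item in front of the current head... */
        atomic_store(&aq_link(q, item)->aql_next, head);
        /* ...then swing the head to it with Compare-And-Store. */
    } while (!atomic_compare_exchange_weak(&q->aq_head, &head, item));
}

void *
atomic_qpop(atomic_queue_t *q)
{
    void *head;

    do {
        head = atomic_load(&q->aq_head);
        if (head == NULL)
            return NULL;        /* the queue was empty */
    } while (!atomic_compare_exchange_weak(&q->aq_head, &head,
        atomic_load(&aq_link(q, head)->aql_next)));
    return head;
}

/* A demonstration item type; the link may sit at any offset. */
struct demo_item {
    int value;
    atomic_queue_link_t link;
};
```

Items pushed with atomic_qpush_lifo come back out of atomic_qpop most-recent first, matching the "head of the queue" wording above.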
Setting up a secure SMTP server with AUTH and TLS enabled in Sendmail
While postfix is the base system's SMTP server, it is still possible to use the venerable Sendmail as your mail server of choice. Securing a sendmail SMTP gateway so it can be used from anywhere with your system's credentials is an easy task; here is how to achieve it.
Enabling Sendmail as the system's SMTP server
The first thing to do is disable postfix as the system's SMTP server. This is controlled by the postfix parameter in /etc/rc.conf:
postfix=NO
We will then install sendmail from pkgsrc with SASL for the authentication mechanism and TLS as the secure transport layer:
$ grep sendmail /etc/mk.conf
PKG_OPTIONS.sendmail= tls sasl
ACCEPTABLE_LICENSES+= sendmail-license
AUTH with SASL
Enabling SASL will build security/cyrus-sasl, but this package failed to build on my NetBSD 5.0.2 box with the following error:
db_ndbm.c:95: warning: passing argument 3 of 'utils->getcallback' from incompatible pointer type
So we will specify that cyrus-sasl should use berkeley as its database type:
$ grep SASL /home/bulk/etc/mk.conf
SASL_DBTYPE= berkeley
We can now install sendmail with TLS and SASL support the classic way:
$ cd /usr/pkgsrc/mail/sendmail && sudo make install clean
The cyrus-sasl package does not include any authentication plugin; it's up to us to pick one that suits our needs. As we want to authenticate against the system's login/password, we will use cy2-login:
$ cd /usr/pkgsrc/security/cy2-login && sudo make install
In order to use this method, we will have to install the saslauthd package. Saslauthd is in charge of plaintext authentications on behalf of the SASL library.
$ cd /usr/pkgsrc/security/cyrus-saslauthd && sudo make install clean
Of course, we want this daemon to start at every boot of this mail server:
# cp /usr/pkg/share/examples/rc.d/saslauthd /etc/rc.d
# echo "saslauthd=YES" >> /etc/rc.conf
# /etc/rc.d/saslauthd start
Now we have to inform the SASL library that it should use saslauthd whenever sendmail asks for an authentication:
# echo "pwcheck_method:saslauthd" > /usr/pkg/lib/sasl2/Sendmail.conf
Setting up the secure transport layer
As everything is in place for authentication, we will now prepare the TLS prerequisites. Instead of generating a self-signed certificate, I usually rely on CACert, "a community driven, Certificate Authority that issues certificates to the public at large for free" (from CACert.org).
In order to generate the certificate signing request (CSR), you can use the CSRGenerator script from CACert, which is really handy.
Once you have generated your server's private key with CSRGenerator and received your server certificate from CACert, simply copy them to /etc/mail/certs, along with CACert root certificate. Make sure your private key has strict permissions, sendmail will refuse to start if it is readable by everyone.
Configuring sendmail
It is now time to write our sendmail configuration. Create a mc file corresponding to your needs in /usr/pkg/share/sendmail/cf, for example:
# cat > /usr/pkg/share/sendmail/cf/korriban.mc << EOF
divert(0)dnl
VERSIONID(`Mustafar')
OSTYPE(bsd4.4)dnl
DOMAIN(generic)dnl
FEATURE(access_db, `hash -T<TMPF> /etc/mail/access')
FEATURE(blacklist_recipients)
FEATURE(mailertable, `hash -o /etc/mail/mailertable')
FEATURE(virtusertable, `hash -o /etc/mail/virtusertable')
FEATURE(genericstable, `hash -o /etc/mail/genericstable')
FEATURE(local_procmail)
dnl ### I use procmail as my MDA
define(`PROCMAIL_MAILER_PATH',`/usr/pkg/bin/procmail')
dnl ### and dspam as my antispam
define(`LOCAL_MAILER_PATH', `/usr/pkg/bin/dspam')
define(`LOCAL_MAILER_ARGS', `dspam -t -Y -a $h "--deliver=innocent" --user $u -d %u')
define(`confMAX_MESSAGE_SIZE', 5000000)
dnl ### here begins the secure SMTP gateway parameters
dnl ###
dnl ### enable SMTP AUTH with LOGIN mechanism
define(`confAUTH_MECHANISMS', `LOGIN')dnl
TRUST_AUTH_MECH(`LOGIN')dnl
dnl ### enable STARTTLS
define(`confCACERT_PATH',`/etc/mail/certs/')dnl
define(`confCACERT', `/etc/mail/certs/cacert.crt')
define(`confSERVER_CERT',`/etc/mail/certs/korriban_server.pem')dnl
define(`confSERVER_KEY',`/etc/mail/certs/korriban_privatekey.pem')dnl
dnl ### end of secure SMTP gateway parameters
MAILER(local)dnl
MAILER(smtp)dnl
MAILER(procmail)
EOF
Once your configuration is ready, build and install it using the following:
# make install-cf CF=korriban
rm -f korriban.cf
m4 ../m4/cf.m4 korriban.mc > korriban.cf || ( rm -f korriban.cf && exit 1 )
echo "### korriban.mc ###" >>korriban.cf
sed -e 's/^/# /' korriban.mc >>korriban.cf
chmod 444 korriban.cf
/usr/bin/install -c -o root -g wheel -m 0444 korriban.cf /etc/mail/sendmail.cf
/usr/bin/install -c -o root -g wheel -m 0444 korriban.cf /etc/mail/submit.cf
Now that sendmail is configured, fire it up by invoking:
# /etc/rc.d/sendmail start
And test that the features we've added are working:
# sendmail -d0.1 -bv root | grep SASL
SASLv2 SCANF SOCKETMAP STARTTLS TCPWRAPPERS USERDB XDEBUG
$ telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 korriban.imil.net ESMTP Sendmail 8.14.5/8.14.5; Sat, 12 Nov 2011 16:43:40 +0100 (CET)
ehlo localhost
250-korriban.imil.net Hello localhost [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
250-VERB
250-8BITMIME
250-SIZE 5000000
250-DSN
250-ETRN
250-AUTH LOGIN
250-STARTTLS
250-DELIVERBY
250 HELP
There you go! Now configure your MUA so it always tries TLS for sending mail, using the LOGIN authentication method.
Speeding up pkgsrc builds with ccache and distcc
Building a large number of packages with pkgsrc can take a very long time. Two helper programs can speed up operations significantly: ccache and distcc.
ccache
From package's DESCR:
ccache is a compiler cache. It acts as a caching pre-processor to C/C++ compilers, using the -E compiler switch and a hash to detect when a compilation can be satisfied from cache. This often results in a 5 to 10 times speedup in common compilations.
Using ccache in pkgsrc is very simple, just add the following line to your /etc/mk.conf:
PKGSRC_COMPILER= ccache gcc
Declaring ccache as a compiler in mk.conf will make it a dependency for every package to be built.
distcc
From package's DESCR:
distcc is a program to distribute compilation of C or C++ code across several machines on a network. distcc should always generate the same results as a local compile, is simple to install and use, and is often two or more times faster than a local compile.
We will setup distcc with two hosts called hostA and hostB. First, install the software on both machines:
# cd /usr/pkgsrc/devel/distcc && make install clean
# cp /usr/pkg/share/examples/rc.d/distccd /etc/rc.d
Configure some parameters in order to allow hostA and hostB to use each other's distcc instances. hostA's IP address is 192.168.1.1, hostB's IP address is 192.168.1.2:
hostA$ grep distcc /etc/rc.conf
distccd=YES
distccd_flags="--allow 192.168.1.0/24 --allow 127.0.0.1 --listen 192.168.1.1 --log-file=/home/distcc/distccd.log"
hostB$ grep distcc /etc/rc.conf
distccd=YES
distccd_flags="--allow 192.168.1.0/24 --allow 127.0.0.1 --listen 192.168.1.2 --log-file=/home/distcc/distccd.log"
Instead of sending logs to syslog, we will use a custom logfile located in distcc's user home directory:
# mkdir /home/distcc && chown distcc /home/distcc
We can then fire up distcc on both hosts:
# /etc/rc.d/distccd start
In order to use hostnames instead of their IP addresses, add them to both /etc/hosts:
# tail -2 /etc/hosts
192.168.1.1 hostA
192.168.1.2 hostB
And finally tell pkgsrc to use distcc along with ccache by adding these lines to /etc/mk.conf on both machines:
PKGSRC_COMPILER= ccache distcc gcc
DISTCC_HOSTS= hostA hostB
MAKE_JOBS= 4
Here we set MAKE_JOBS to 4 because we are using two single-CPU hosts. The recommended value for MAKE_JOBS is twice the number of CPUs, to avoid idle time.
Testing
To see distcc in action, simply watch the /home/distcc/distccd.log file while you are building a package:
$ tail -f /home/distcc/distccd.log
distccd[5218] (dcc_job_summary) client: 192.168.1.1:64865 COMPILE_OK exit:0 sig:0 core:0 ret:0 time:175ms gcc lockfile.c
distccd[8292] (dcc_job_summary) client: 192.168.1.1:64864 COMPILE_OK exit:0 sig:0 core:0 ret:0 time:222ms gcc counters.c
distccd[27779] (dcc_job_summary) client: 192.168.1.1:64881 COMPILE_OK exit:0 sig:0 core:0 ret:0 time:3009ms gcc ccache.c
distccd[27779] (dcc_job_summary) client: 192.168.1.1:64863 COMPILE_OK exit:0 sig:0 core:0 ret:0 time:152ms gcc compopt.c
The file mk.conf is the central configuration file for everything that has to do with building software. It is used by the BSD-style Makefiles in /usr/share/mk and especially by pkgsrc. Usually it is found in the /etc directory. If it doesn't exist there, feel free to create it.
Because all configuration takes place in a single file, there are some variables that let the user choose different configurations depending on whether the base system or packages from pkgsrc are being built. These variables are:
- BSD_PKG_MK: Defined when a pkgsrc package is built.
- BUILDING_HTDOCS: Defined when the NetBSD web site is built.
- None of the above: When the base system is built. The file /usr/share/mk/bsd.README is a good place to start in this case.
A typical mk.conf file would look like this:
# This is /etc/mk.conf
#
.if defined(BSD_PKG_MK) || defined(BUILDING_HTDOCS)
# The following lines apply to both pkgsrc and htdocs.
#...
LOCALBASE= /usr/pkg
#...
.else
# The following lines apply to the base system.
WARNS= 4
.endif
Document status: DRAFT
In this article I will document how to transform a Solaris 10 x86 core installation into a pkgsrc-powered desktop system. The Solaris core installation does not include any X11, GNOME or GNU utilities. We will use modular X.org from pkgsrc. The guide assumes that the reader has some prior experience using Solaris and pkgsrc.
Contents
Installation
Begin by installing a Solaris core system. When done, mount the Solaris CD/DVD and install the following extra packages:
- SUNWarc Lint Libraries (usr)
- SUNWbtool CCS tools bundled with SunOS (Solaris 9)
- SUNWbzip The bzip compression utility
- SUNWdoc Documentation Tools
- SUNWhea SunOS Header Files
- SUNWlibm Math & Microtasking Library Headers & Lint Files (Usr)
- SUNWlibmr Math Library Lint Files (Root) (Solaris 10)
- SUNWman On-Line Manual Pages
- SUNWscpr Source Compatibility, (Root)
- SUNWscpu Source Compatibility, (Usr)
- SUNWsprot Solaris Bundled tools
- SUNWtoo SUNWtoo Programming Tools
- SUNWxcu4 XCU4 Utilities
These packages are required if you intend to use modular-xorg-server from pkgsrc:
- SUNWdfbh Dumb Frame Buffer Header Files
- SUNWaudh Audio Header Files (don't ask why!)
# mount -F hsfs /dev/dsk/c1t1d0p0 /mnt
# cd /mnt/Solaris10/Product
# cp -r SUNW... /var/spool/pkg
# pkgadd
To see which SUNW packages are already installed, use the /usr/bin/pkginfo command.
Compiler setup
Now you need a compiler. You have a couple of options:
- Use my prebuilt compiler kit, available from http://notyet
- Install SUNWgcc from the Solaris DVD
- Install Sun Studio 10
- Install gcc from Sunfreeware.com
- [advanced] Bootstrap your own gcc, using one of the above. If you get an error about a library not being found, remember to use crle -u -l libpath to add it to the link path. Make sure any /usr/pkg/* library paths are included early in this string so that pkgsrc binaries will only have dependencies on pkgsrc libraries as much as possible.
pkgsrc
Got a compiler? Good! Let's download and bootstrap pkgsrc.
Grab pkgsrc.tar.gz from ftp://ftp.NetBSD.org/pub/pkgsrc/current/ and untar to /usr, or get it from CVS.
# cd /usr/pkgsrc/bootstrap
# env CFLAGS=-O2 CC=/usr/local/gcc4/bin/gcc ./bootstrap
[coffee break]
Now we can tune /usr/pkg/etc/mk.conf. I use the following additional settings:
CFLAGS+=-O2
CXXFLAGS+=-O2
CC=/usr/local/gcc4/bin/gcc
CXX=/usr/local/gcc4/bin/g++
X11_TYPE=modular
PKG_DEVELOPER=yes
PKG_DEFAULT_OPTIONS+=freetype truetype mmx subpixel official-mozilla-branding
At this point you're free to install whatever packages you like.
On Solaris 9 (at least), Python 2.4 is the latest version that will build. You may want to set PYTHON_VERSION_DEFAULT= 24 in mk.conf in order to build python packages. (As of 13feb2010.)
Installing modular X.org from pkgsrc
It is mentioned above, but easy to miss... you really want to set X11_TYPE=modular in mk.conf, otherwise none of this will work. You will also want to set MOTIF_TYPE=openmotif or MOTIF_TYPE=lesstif to avoid having pkgsrc/mk/motif.buildlink3.mk choose /usr/dt/... which requires X11_TYPE=native instead of modular.
Packages needed for modular X.org are:
- meta-pkgs/modular-xorg-fonts
- xxx: fonts/mkfontdir needs a hack that I have not yet committed
- meta-pkgs/modular-xorg-apps
- x11/modular-xorg-server
- xxx: needs some hacks that have not yet been committed (--disable-dri, libdrm, driproto KDSETMODE ioctl, vtname /dev/fb)
- x11/xf86-input-keyboard
- x11/xf86-input-mouse
- x11/xf86-video-vesa (or pick a suitable driver for your card)
- x11/xterm
Now run /usr/pkg/bin/Xorg -configure, which should work. Move the generated configuration file to /etc/X11/xorg.conf. You can then attempt to start the server by running Xorg with no arguments. If you get a picture but the mouse isn't working, try setting your mouse device to "/dev/kdmouse" and the protocol to "PS/2" in xorg.conf.
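The mouse workaround just described corresponds to an xorg.conf fragment roughly like the following (a sketch; the Identifier must match whatever Xorg -configure generated in your file):

```
Section "InputDevice"
        Identifier  "Mouse0"
        Driver      "mouse"
        Option      "Protocol" "PS/2"
        Option      "Device"   "/dev/kdmouse"
EndSection
```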
TODO: write about installing firefox, desktop stuff, ...
Outstanding PRs with patches
There are some PRs with patches that solve Solaris build issues but have not yet been committed. These may solve a problem you are having.
- pkg/40153 - Synopsis: pkgsrc/devel/binutils fails to build on solaris 10 sparc
- pkg/40201 - Synopsis: pkgsrc/sysutils/memconf update needed
- pkg/39085 - Synopsis: firefox3 compile problem (just committed!)
- pkg/40221 - Synopsis: pkgsrc/mail/p5-Mail-Mbox-MessageParser requires GNU grep (needed by grepmail)
- pkg/40222 - Synopsis: pkgsrc/databases/sqsh fix build w/sybase libs on Solaris
Other hints
These may not be the 'right' thing to do, but they can still help to get past some issues until the right thing can be figured out:
- Errors building libffi (required by Python 2.5+) using the Sun Studio compiler can be worked around by using gcc or by setting:
PYTHON_VERSION_DEFAULT=24
- Errors regarding a missing gtk-rebase can be worked around by installing the package textproc/gtk-doc.
- Using a native JDK (on anything except SunOS-5.11-i386, which already works) can be done by adding these lines to /usr/pkg/etc/mk.conf:
PKG_JVM_DEFAULT= sun-jdk6
_PKG_JVMS_ACCEPTED+= sun-jdk6
- Errors regarding the wrong number of arguments to readdir_r() and some other xxx_r() functions can be fixed by adding this to the package Makefile:
CFLAGS.SunOS+= -D_POSIX_PTHREAD_SEMANTICS
If you encounter issues with missing libraries, refer to the pkgsrc guide first.
See also
External links
- NPC on Solaris developer HOWTO
- pkgsrc under Solaris at The Ö-Files - recommends using gcc (on SPARC, the GCC for Sun Systems package, of which 4.2.0 is the last version supporting Solaris 9; on x86, a bootstrapped lang/gcc34), 32-bit binaries, and building everything from pkgsrc
I'm writing this guide on Tru64, but it should also apply to older versions of this fine OS, formerly branded Digital UNIX and, before that, OSF/1 (the system still identifies itself as OSF1).
Setting the environment
There are no bzip2 or cvs binaries, so we have to fetch the tarball by FTP. You can try ftp(1), or Netscape if you have DECwindows installed. You can also use the portable NetBSD FTP client included in the binary bootstrap kit if you decide to extract it first; I think tnftp is much nicer for downloading than anything else.
The system doesn't ship with gcc, unlike some newer Unices, but it has a nice compiler called ccc (Compaq C Compiler).
Extracting the files
You have to extract the downloaded sources:
# cd /usr
# gunzip -c /pkgsrc.tar.gz | tar xf -
Bootstrapping needs gcc:
# cd pkgsrc/bootstrap/
# env CC=/usr/local/gcc4/bin/gcc CFLAGS=-O2 ./bootstrap
Configuring pkgsrc
After the bootstrap is completed, you must decide which compiler to use. To keep using gcc, add the following to your mk.conf:
CC=/usr/local/gcc4/bin/gcc
CXX=/usr/local/gcc4/bin/g++
To use the native compiler, set PKGSRC_COMPILER=ccc in mk.conf. You will need at least Compaq C 6.4 (it supports the __VA_ARGS__ variadic macros that tnftp(1) uses).
Software requirements
Before you can use pkgsrc, you need a few packages installed on your Linux system up front.
- gcc (and libstdc++)
- libncurses-devel
- zlib and zlib-devel
- openssl-devel (optional but required for some packages)
- libudev-dev (optional but required for some packages)
- gawk
The names may vary depending on which Linux distribution you are using. Also be mindful of the platform (e.g. i686 vs. x86_64); some have different prerequisite packages. Note that some very basic tools such as file, patch, sed, and others are required as well.
Troubleshooting bootstrap
Shell's echo command is not BSD-compatible
If you see this error
ERROR: Your shell's echo command is not BSD-compatible.
This error is known to occur if /bin/sh is linked to /bin/dash (recent Ubuntu versions).
The fix is to issue the following before commencing bootstrap: export SH=/bin/bash
ARG_MAX undeclared
If bootstrap stops at
In file included from glob.c:12:
__glob13.c: In function globextend:
__glob13.c:836: error: ARG_MAX undeclared (first use in this function)
Then apply this patch.
FORTIFY_SOURCE
If bootstrap stops at
/usr/pkgsrc/bootstrap/work/bmake/arch.c: In function 'Arch_Touch':
/usr/pkgsrc/bootstrap/work/bmake/arch.c:1038: warning: ignoring return
value of 'fwrite', declared with attribute warn_unused_result
*** Error code 1
This error occurs because Linux distributions build with -D_FORTIFY_SOURCE by default. Bootstrap can be achieved by:
CFLAGS="-U_FORTIFY_SOURCE" ./bootstrap
libncurses not installed
If bootstrap stops at
ERROR: This package has set PKG_FAIL_REASON:
ERROR: No usable termcap library found on the system.
Then install the ncurses development package.
On an RPM-based system this might be via 'yum install ncurses-devel'; on a Debian/Ubuntu system, 'apt-get install libncurses5-dev'.
By default, IRIX is quite a hostile environment if you are used to systems where Bash, CVS and the like are already installed. It also lacks many other tools (or at least sufficiently capable versions of them), so they all have to be built when bootstrapping pkgsrc.
Exploring the environment
$ echo $SHELL
/bin/ksh
$ cvs
ksh: cvs: not found
$ zsh
ksh: zsh: not found
$ bash
ksh: bash: not found
$ bzip2
ksh: bzip2: not found
$
So far, so bad. I will have to install all these tools via pkgsrc.
Getting pkgsrc
Since cvs is not available, I have to download the tarball from the FTP server.
$ ftp
ftp> open ftp.NetBSD.org
Connected to ftp.netbsd.org.
220 ftp.NetBSD.org FTP server (NetBSD-ftpd 20060923) ready.
Name (ftp.NetBSD.org:rillig): ftp
331 Guest login ok, type your name as password.
Password:
...
ftp> cd /pub/pkgsrc/current
250 CWD command successful.
ftp> ls
227 Entering Passive Mode (204,152,184,36,251,197)
150 Opening ASCII mode data connection for '/bin/ls'.
total 141322
drwxrwxr-x 52 srcmastr netbsd 1024 Jan 8 05:39 pkgsrc
-rw-rw-r-- 1 srcmastr netbsd 31658232 Jan 6 07:22 pkgsrc.tar.bz2
-rw-rw-r-- 1 srcmastr netbsd 56 Jan 6 07:22 pkgsrc.tar.bz2.MD5
-rw-rw-r-- 1 srcmastr netbsd 65 Jan 6 07:22 pkgsrc.tar.bz2.SHA1
-rw-rw-r-- 1 srcmastr netbsd 40628869 Jan 6 07:19 pkgsrc.tar.gz
-rw-rw-r-- 1 srcmastr netbsd 55 Jan 6 07:20 pkgsrc.tar.gz.MD5
-rw-rw-r-- 1 srcmastr netbsd 64 Jan 6 07:20 pkgsrc.tar.gz.SHA1
226 Transfer complete.
ftp> binary
200 Type set to I.
ftp> get pkgsrc.tar.gz
...
ftp> quit
221-
Data traffic for this session was 0 bytes in 0 files.
Total traffic for this session was 3445 bytes in 1 transfer.
221 Thank you for using the FTP service on ftp.NetBSD.org.
$
Extracting the files
$ mkdir proj
$ cd proj
$ gzcat ../pkgsrc.tar.gz | tar xf -
$ cd pkgsrc
$ CC=cc ./bootstrap/bootstrap --unprivileged --compiler=mipspro
... TODO: continue
Note: because nbsed cannot handle files with embedded '\0' characters, and GNU info files contain such characters, you should install textproc/gsed as soon as possible and then replace the TOOLS_PLATFORM.sed line in your mk.conf file.
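Assuming gsed was installed under the default /usr/pkg prefix (adjust the path to your own prefix), the replacement line in mk.conf would look something like:

```
TOOLS_PLATFORM.sed=     /usr/pkg/bin/gsed
```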
See also
External links
- pkgsrc on IRIX @ WTFwiki
- Using NetBSD's pkgsrc on IRIX @ Nekochan
Synopsis
HP-UX is a version of Unix for HP's PA-RISC and Integrity line of servers and workstations. HP-UX 11.x versions are pretty well supported by pkgsrc and it's also quite usable on 10.20.
Preparations
pkgsrc
Simply download the pkgsrc snapshot tarball as you would on other platforms. You can also use CVS if it is available.
XXX TODO: explain in pkgsrc page and link to there.
Patches
Read README.HPUX for the required patches and prerequisites.
Compiler
You will need a compiler, which can be HP's ANSI C/C++ compiler or GCC; the latter is available from HP or other third parties.
Bootstrap
Bootstrapping is done the usual way.
CC=path_to CXX=path_to ./bootstrap --abi 32 --compiler gcc
XXX TODO: explain in pkgsrc page and link to there.
Audio
Audio playback works pretty well on Series 700 workstations through HP Audio and esound.
You will need to install the following depot beforehand:
B6865AAA -- HP Audio Clients for Series 700 - English
You can also use libao-esd with packages which support libao.
See also
- README.HPUX
- HP-UX TODO List
Introduction
'pkgsrc' on AIX must be a fairly uncommon occurrence, considering the general state of the documentation around getting it working. This is probably partly due to the advent of the 'AIX toolkit', which allows RPMs to be installed with minimum hassle on an AIX box. On the plus side, this toolkit also makes what appears to have been a finicky bootstrap procedure pretty simple.
Due to limited resources I've only attempted this on AIX 5.2 and AIX 5.3, but both appear to work smoothly. Notes from previous versions regarding AIX 4.1 have been left in place.
Setup the base system
For the majority of systems the following will be a non-issue as, more often than not, the entire base system will have been installed on the box and thus the 'bos.adt' packages will be present. To verify that your system has those tools, run the following command:
TODO: installp <check for 'bos.adt'>
you'll notice the ABC output. If it's not there then you need to add those packages by sticking in the relevant CD (first install disk, generally) and running your favourite admin tool or the following command.
TODO: installp <install the package>
Precisely which set of packages is the minimum required set of 'bos.adt' packages to bootstrap is left as an exercise for the reader.
Install a compiler
As you'll probably realise, the one thing that 'pkgsrc' can't do without is a compiler. The simplest option is to use the 'gcc' compiler which (I think) is available on most versions of AIX (perhaps someone else could clarify these options). There's no particular reason that you can't or shouldn't use another compiler, but note that many of the packages within 'pkgsrc' depend on GNU extensions and may cause you problems. Mixing compilers will probably wreak havoc unless you are extremely careful and have an excellent understanding of the dependencies and technologies involved.
Option 1: Using 'gcc' from the AIX toolkit
I believe that 'gcc' is available in AIX, but if you don't have it you should be able to download it from IBM like I did. However, I'd send a word of warning about having the complete 'AIX linux toolkit' installed, because you will inevitably get caught trying to avoid conflicts with the 'pkgsrc' versions of libraries. Consequently, I'd recommend you only get the compiler you want and remove everything else via the IBM-supplied 'rpm' command.
The first step is to ensure that you have the 'rpm' manager
installp <show rpm>
and install it (in a similar manner to above) if you find it's missing or has been stripped out by an enthusiastic security officer.
Thus, if you follow the above advice, running the command 'rpm -qa' should produce something similar to the following
# rpm -qa
libgcc
gcc
gcc-c++
if you don't, you'll have a much longer list but the above items should be included in it.
P.S.: I'm using 'gcc4' from the AIX toolkit; however, it would probably be more prudent to use 'gcc3', which is also available, as 'pkgsrc' does not appear to be particularly 'gcc4'-friendly.
P.P.S.: in achieving the above I removed the 'mkisofs' and 'cdrecord' packages from the system. This suited me fine; however, you may wish to verify whether that will impact some of the base system's backup options (i.e. backup to DVD) and/or whether reinstatement of those utilities via 'pkgsrc' will solve those issues.
Option 2: Use a-n-other compiler
I cannot recommend or caution against this option; I'm simply not able to afford the IBM compiler (or another, if there is one). Should anyone wish to give me a license, I'll be happy to try it. However, apart from the problems you'll no doubt have with the general adherence to GCC and its extensions within the 'pkgsrc' packages, it should work.
That's a heavily conditioned should, for those that didn't get it the first time.
Bootstrap away, bootstrap away, bootstrap away
Generally, I like to mirror the native system layout (it's one of the primary reasons I like 'pkgsrc'), so I put 'pkgsrc' into '/opt' and use the following bootstrap options
- --prefix=/opt/pkg
- --pkgdbdir=/var/opt/pkg
whilst leaving the others with their default 'pkgsrc' setting.
Where you put it is, of course, entirely up to you, but remember to set the 'CC' environment variable before bootstrap or you'll get into a bit of a mess when the build wrappers fail to find it later. This
# CC=/opt/freeware/bin/gcc ; export CC
is probably what you want but, if you're using another compiler, you'll need to change it correspondingly.
Go bootstrap.
# cd /opt/pkgsrc/bootstrap
# ./bootstrap --prefix=/opt/pkg --pkgdbdir=/var/opt/pkg
[ ... ]
Complete success and happiness has been achieved
[ ... ]
Fulfillment of other life ambitions imminent
[ ... ]
Bootstrap successful
#
Hopefully, that's the worst of it over.
Pack-up and wagons roll
Now you need to complete your local configuration
- set library locations
- set path
- set build environment (i.e. mk.conf)
- set other variables
And ensure that you double check 'README.AIX' for important changes.
The last decision you have to make (for now at least) is whether to use one of the 'pkgsrc' compilers instead of the AIX toolkit compiler you just used. Personally, I see little reason to, particularly as the latest compiler in 'pkgsrc' is 'gcc3' while the AIX toolkit gives me a shiny gcc4.2. N.B.: as noted above, building with 'gcc4' may not be as resilient, as 'pkgsrc' seems to be more settled on 'gcc3' at present.
The only thing left is for someone to re-generate the binary bootstrap package for the other AIX souls out there so that the above article is completely useless.
Known issues
The following section outlines the various problems encountered whilst trying to get the system working. Note (to self): these should only really appear here if they are specific to AIX; find somewhere friendlier for more generic notes.
Packages
This is a very limited selection of the package problems that have been encountered. If anyone does a complete build, more power to them; perhaps they'll add some info here.
sysutils/fam
It will just not compile. The situation is a bit like that described for OSF1 in PR #31489, for AIX as well. After trying to fix the first errors I decided to add FAM_DEFAULT=gamin to my mk.conf. I've posted a PR for setting this permanently on AIX: #41815.
devel/gettext
See bug (todo: look this up)
devel/pcre
Just does not compile (the C++ part cannot be linked); a bug report was opened in the PCRE project.
lang/perl5
The Configure and Makefile.SH scripts delivered by perl do not work properly when compiling with gcc. They hand -b arguments over to gcc without prefixing them with -Wl,, which needs to be done because they are meant for ld. I've raised PR #41814 with a patch file included to fix this. The fix was tested on a Power2 system running AIX 5.1 with gcc 3.4 from pkgsrc. --OliverLehmann
security/openssh
Another package that thinks it needs 'perl'; OK, it does, but we don't want to build it, so let's just hack out the tools again.
security/openssl
One of those packages that depends on 'perl'; simply hack the Makefile to remove 'perl' from the required build tools. This should allow it to build using the base system 'perl'.
I also had trouble with the linking of the shared libraries; I'm not sure if this points to my use of 'gcc4', but I manually hacked 'Makefile.shared' (I think this is generated, so you may need to hack it after the error) to include '--shared' in the linking command. You'll find this in the LINK_SO definition; look for the SHAREDLINK command. --ttw
P.S.: remember to set the 'PERL5' variable as above
security/gnupg2
This isn't actually a problem with the build of 'gnupg2' but with the fact that it depends on a bazillion packages. I had issues with 'curl' and 'pinentry'. I should really log bugs for these, and I'll try, but I need to get this @$%# completed and get work done too. Anyway, I can't remember what the problem with 'curl' was, but it must have been one of the standard ones here, probably my base perl. 'pinentry' posed more of a problem: apparently a dependency on 'getopt' is missing from the build. I didn't actually fix this correctly; once I finally tracked it down I simply did the following
# ln -s /opt/pkg/include/getopt.h work/.buildlink/include/getopt.h
# for f in /opt/pkg/lib/libgetopt* ; do ln -s $f work/.buildlink/lib/`basename $f` ; done
Next we hit a block with the definition of FD_SETSIZE, which on AIX is 65534, whereas the GNU portable threads library that this package uses fixes a maximum of 1024 (although the change log appears to contradict this). Either way, I hacked the 'Makefile' so that the static inclusion of the 'pth' stuff (under the 'pre-configure' action) includes the '--with-fdsetsize=1024' option.
Current advice would be try 'security/gnupg' instead.
P.S.: odd that the build still shows '-L/opt/pkg/include' but cannot find it; that makes tracking down issues difficult. I need to investigate the 'pkgsrc' mechanics further to understand how/why this works. --ttw 23:45, 17 May 2009 (UTC)
lang/python25
This is a new one to me: the configure script for python explicitly overrides the 'CC' variable we defined when setting up pkgsrc. I've hacked this (once more, I know I should correct these things, but I'm busy scratching other itches) by adding a 'CONFIGURE_ARGS' entry to the 'Makefile' with the value '--with-gcc'. This conveniently avoids the evil configure code and allows the default 'CC' to be used sanely.
NB: the above is only one issue; this doesn't work for me as is.
lang/python24
This is slightly less ugly than 'lang/python25', primarily because it doesn't appear to depend on GNU 'pth', but I didn't really track this down too hard either. My major gripe is that IPv6 doesn't work due to a failure in the 'configure' script. That should be hackable, but I've currently got it built with '--disable-ipv6', unsatisfactory though that is.
Further down the line there is a problem with the 'bsddb' package, which has some threading issues that I've not investigated. I wasn't interested in it anyway, so I simply disabled it (by hacking it out of the Makefile) and we have a build.
I'm not sure that counts as success, but I'm ignoring that in favour of my own needs.
mail/dovecot
This is pretty nice all in all, but I couldn't get it to build due to issues with 'rquota' in the 'quota' plugins. The solution was to hack 'quota-fs.c', change the local 'rquota.h' include to the system version, and add the other necessary headers.
Unfortunately, there's a little more incompatibility that needs hacking: more edits to 'quota-fs.c', where we change 'result.status' to the corresponding 'result.gqr_status' and 'result.getquota_rslt_u.status' to 'result.gqr_rquota'.
I'm sure all this is "wrong", but it appears to build. The only thing required after that is to ensure you add the 'dovecot' user before doing the install. Happy days.
Operating system
As we all know, AIX isn't any more singular than other systems, and various problems arise on various versions, particularly as both pkgsrc and AIX are organic entities.
Feel free to add to the knowledge quotient here.
AIX 5L
'undef'd stdio functions within c++ library
The C++ standard library headers appear to have a small problem wherein 'cstdio' 'undef's the macros for 'fgetpos', 'fsetpos', 'fopen' and 'freopen'. Unfortunately, this is incorrect when using the _LARGE_FILES extensions, as these macros alias the 64-bit versions of the functions. If you alter your '/opt/freeware/lib/gcc/powerpc-ibm-aix5.3.0.0/4.2.0/include/c++/cstdio' file to correct this, everything should start to flow smoothly again. The following is what I did:
# cd /opt/freeware/lib/gcc/powerpc-ibm-aix5.3.0.0/4.2.0/include/c++
# ln cstdio cstdio,v1.1.1.1
# sed -e '/fgetpos/ d' -e '/fsetpos/ d' \
-e '/fopen/ d' -e '/freopen/ d' \
cstdio,v1.1.1.1 >cstdio,v1.2
then we add the following to the new file
#ifndef _LARGE_FILES
#undef fgetpos
#undef fsetpos
#undef fopen
#undef freopen
#endif /* _LARGE_FILES */
and replace the active one
# ln -f cstdio,v1.2 cstdio
we can now compile away happily again
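The sed invocation above simply deletes every line mentioning one of those four functions. It can be sanity-checked on a throwaway sample file (the file name and contents here are made up for illustration):

```shell
# Build a tiny stand-in for the cstdio header and run the same deletion
printf '%s\n' '#undef fgetpos' '#undef fsetpos' '#undef fopen' \
    '#undef freopen' 'int keep_me;' > sample.txt
sed -e '/fgetpos/ d' -e '/fsetpos/ d' \
    -e '/fopen/ d' -e '/freopen/ d' sample.txt
# Only "int keep_me;" survives
```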
'bos.net.nfs.adt' required for various builds
I would imagine that this issue will be transparent to most users and can probably be resolved by a quick search on Google; however, I put it here for posterity and completeness, and for those to whom hacking at any level is alien. Basically, you need 'rpcgen' for various builds, and it's only included as part of the 'bos.net.nfs.adt' package from the base system. You'll probably have it already.
--ttw 15:18, 7 June 2009 (UTC): this may be incorrect; perhaps it's the 'bos.net.nfs.server' package.
AIX 4.1
AIX 4.1 is a pretty old system at this stage; however, AIX systems are renowned for their longevity, although I'd guess there are very few being used as development platforms these days.
Still, if you can contribute to clarifying any of the following and/or find some issues of your own, feel free to help shuffle the rest of the world round that blockade.
For older AIX releases like this, mirrors of the AIX Public Domain Software Library (aixpdslib) may prove useful to get started.
Conflicting type declarations
Following the instructions to bootstrap pkgsrc-2006Q3 did not work on my older 4.1 system. After several minutes the bootstrap process aborted with the following error:
sh makelist -bc ./vi.c ./emacs.c ./common.c > help.c
sh makelist -bh ./vi.c ./emacs.c ./common.c > help.h
gcc -g -O2 -I. -I./.. -I. -I.. -I./../libedit -I./../libnetbsd -c chared.c
In file included from sys.h:142,
from chared.c:37:
/usr/include/regex.h:172: conflicting types for `regex'
/usr/include/libgen.h:31: previous declaration of `regex'
/usr/include/regex.h:173: conflicting types for `regcmp'
/usr/include/libgen.h:30: previous declaration of `regcmp'
*** Error code 1
Stop.
bmake: stopped in /usr/pkgsrc/bootstrap/work/tnftp/libedit
*** Error code 1
Stop.
bmake: stopped in /usr/pkgsrc/bootstrap/work/tnftp
===> exited with status 1
aborted.
I found an explanation of this error with help from Google:
- Old Unix versions define these functions in libgen.h, newer ones in regex.h. It seems that AIX defines them in both places, but with different prototypes.
Not having any skills in C programming, I was unable to resolve the issue by correcting the conflicting definitions and had to resort to sledgehammer tactics; I removed libgen.h but the bootstrap process then failed as it was not able to locate the file. I then overwrote the libgen.h file with a copy of regex.h. This sorted out the conflicting type declaration problem, but I am not sure if this will have any adverse effect on my system! The plan was just to get the bootstrap process to complete then re-instate the original libgen.h file.
You should never change your operating system's files just to make any third-party software run. The proper solution is to tell the author of tnftp (see pkgsrc/net/tnftp/Makefile.common) to have a look at it. --Rillig 13:11, 17 December 2006 (CET)
A workaround -- remove check for libgen.h from tnftp/configure. (Verified on AIX 4.3.2.0, pkgsrc-2007Q1.) Shattered 21:10, 9 July 2007 (CEST)
Undeclared variable
After restarting the bootstrap process, it failed again with the following error:
gcc -g -O2 -I. -I./.. -I. -I.. -I./../libedit -I./../libnetbsd -c inet_ntop.c
gcc -g -O2 -I. -I./.. -I. -I.. -I./../libedit -I./../libnetbsd -c inet_pton.c
inet_pton.c: In function `inet_pton4':
inet_pton.c:92: `uint32_t' undeclared (first use in this function)
inet_pton.c:92: (Each undeclared identifier is reported only once
inet_pton.c:92: for each function it appears in.)
inet_pton.c:92: parse error before `val'
inet_pton.c:108: `val' undeclared (first use in this function)
*** Error code 1
Stop.
bmake: stopped in /usr/pkgsrc/bootstrap/work/tnftp/libnetbsd
*** Error code 1
Stop.
bmake: stopped in /usr/pkgsrc/bootstrap/work/tnftp
===> exited with status 1
aborted.
This is as far as I have managed to get at the moment. I will update this page as and when I have a solution to this problem.
I think adding #include at line 25 of pkgsrc/net/tnftp/files/tnftp.h helps. --Rillig 14:03, 17 December 2006 (CET)
AIX 4.1.5 appears not to have inttypes.h. --ChristTrekker 14:30, 22 April 2009 (UTC)
Which makes sense, since 4.1.5 predates C99. --ChristTrekker 04:12, 29 April 2009 (UTC)
missing termcap library
Using AIX 4.1.5 and gcc 2.95.2, I get this far...
===> running: (cd /usr/pkgsrc/net/tnftp && /usr/pkgsrc/bootstrap/work/bin/bmake -DPKG_PRESERVE MAKECONF=/usr/pkgsrc/bootstrap/work/mk.conf install)
ERROR: This package has set PKG_FAIL_REASON:
ERROR: No usable termcap library found on the system.
*** Error code 1
Stop.
bmake: stopped in /usr/pkgsrc/net/tnftp
===> exited with status 1
aborted.
Updates will be posted here as progress is made. --ChristTrekker 18:57, 17 March 2009 (UTC)
It appears that AIX has a libtermcap.a but doesn't provide termcap.h. --ChristTrekker 16:00, 18 March 2009 (UTC)
working bootstrap
I don't remember what I did now, but I managed to get bootstrapped. The problem now is that very few packages build: when trying to make libtool-base I get configure: error: C compiler cannot create executables, which is clearly a bogus complaint since I've obviously created executables. I've submitted some patches with PRs, so I really need to wipe out my pkgsrc installation and try bootstrapping from scratch again to see if the process is smoother. --ChristTrekker 21:24, 9 June 2009 (UTC)
I think fixing this is just a matter of specifying CC on the command line. --ChristTrekker 18:09, 21 July 2009 (UTC)
This document specifies additional details helpful or necessary when /usr/pkgsrc is mounted via NFS. These are mostly related to /etc/mk.conf variables you can set.
Work files
The working files have to be placed in a writable directory, so if the mount is read-only, you need to add:
WRKOBJDIR=/usr/work
You also need to create that directory. Alternatively, if the mount is read/write, you just need to keep the work files from clobbering those from other platforms, so instead add:
OBJMACHINE=defined
You may want to set this in /usr/pkgsrc/mk/defaults/mk.conf if you intend on using it anywhere, so that it will be used everywhere and you don't shoot yourself in the foot accidentally.
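As a sketch of the effect of OBJMACHINE: the work directory name gains a machine-type suffix, so builds from different platforms sharing the tree don't clobber each other. (The suffix pkgsrc actually uses comes from its MACHINE_ARCH variable; uname -m below is only a stand-in for illustration.)

```shell
# Per-platform work directory naming, roughly what OBJMACHINE produces,
# e.g. "work.x86_64" on one client and "work.sparc64" on another
machine=$(uname -m)
printf 'work.%s\n' "$machine"
```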
Distribution files
The distribution files have to be placed in a writable directory, so if the mount is read-only, you need to add:
DISTDIR=/usr/distfiles
You also need to create that directory. Alternatively, you can mount the NFS partition read/write, which can be helpful if you build the same packages on multiple platforms.
Generated packages
If you generate binary packages, they have to be placed in a writable directory where they will not clobber those from other platforms, so add:
PACKAGES=/usr/pkg/packages
You may want to set this in /usr/pkgsrc/mk/defaults/mk.conf if you intend on using it anywhere, so that it will be used everywhere and you don't shoot yourself in the foot accidentally. (Is it possible to do something like /usr/pkg/packages/sys-ver/arch automatically?)
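Tying the variables on this page together, a client mk.conf for a read-only NFS-mounted /usr/pkgsrc might look like this. A sketch only: the paths are examples, and the PACKAGES line shows one way to separate packages per platform using pkgsrc's own OPSYS, OS_VERSION and MACHINE_ARCH variables.

```
# Local, writable locations for work files, distfiles and binary packages
WRKOBJDIR=      /usr/work
DISTDIR=        /usr/distfiles
# Keep binary packages separate per OS version and architecture
PACKAGES=       /usr/pkg/packages/${OPSYS}-${OS_VERSION}/${MACHINE_ARCH}
```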
What is pkgsrc
pkgsrc (spoken: "package source") is the main package management framework for NetBSD. With pkgsrc you can easily add, remove and manage software on your system. pkgsrc is basically a set of files, grouped by category, which contain the information needed to install the software you have selected. All these files together are usually referred to as the pkgsrc tree. This tree is maintained by the pkgsrc developers, who make changes to it every day; therefore it is necessary to update the pkgsrc tree regularly.
Documentation
It is strongly advised to rely on the information in The pkgsrc Guide. This wiki is semi-official and frequently outdated, sometimes even misleading.
Preparing pkgsrc
Obtaining the current pkgsrc source tree
See The pkgsrc Guide
Creating WRKOBJDIR
To keep the tree clean and your work directories out of it, create a directory, e.g.
# mkdir /usr/work
and define WRKOBJDIR in /etc/mk.conf:
WRKOBJDIR=/usr/work
Creating DISTDIR
We also want our distfiles to be stored outside of the pkgsrc directory. Therefore we add the DISTDIR variable to /etc/mk.conf:
DISTDIR=/usr/distfiles
and create it with:
# mkdir /usr/distfiles
Installing packages
To install packages, we need to become root.
$ su
then we change to the directory (category) and then to the package we want to install.
# cd /usr/pkgsrc/misc/figlet
to install we enter
# make install
afterwards we clean up and enter
# make clean
if this was a package with dependencies, we also enter
# make clean-depends
You can put them all in one line too.
# make install clean clean-depends
If you wish to clean the distfiles, the files that have been downloaded, you enter
# make distclean
List Packages
$ pkg_info
Removing Packages
# pkg_delete packagename
Updating Packages
You can update a single package using make update.
# make update
On-line help
Besides The pkgsrc Guide there is also a built-in on-line help system.
# make help
gives you the usage information. This requires you to already know the name of the target or variable you want more info on (just like man does).
Most targets and variable names are documented, but not all are.
Some folks don't realize that pkgsrc provides an easy way to configure many packages. To see if a particular package has such options, type the following while in the package's directory under /usr/pkgsrc:
make show-options
As an example, we'll use uim, an input method for Japanese.
cd /usr/pkgsrc/inputmethod/uim
make show-options
I see the following
Any of the following general options may be selected:
anthy Use Anthy as Japanese conversion program.
canna Use Canna as Japanese conversion program.
eb Enable EB dictionary library support.
gtk Enable support for GTK.
qt
These options are enabled by default: anthy canna gtk
These options are currently enabled: anthy canna gtk
If one only wants the default options, then a simple make install clean; make clean-depends will install them. However, I don't want the defaults: I do want anthy and gtk, but I don't want canna, and I wish to add qt.
One can either do this at the command line or put the lines in /etc/mk.conf. Usually, the variable will be called PKG_OPTIONS.pkgname where pkgname is, oddly enough, the name of the package. Most packages that have options will have an options.mk file in their directory. A quick glance at that will show you the name of the variable. (If there is no options.mk file, the name of the variable can be found in the Makefile.) In this case the options.mk file has the line
PKG_OPTIONS_VAR= PKG_OPTIONS.uim
So, I will type
make PKG_OPTIONS.uim="qt -canna" install clean; make clean-depends
This will build uim with the anthy, gtk and qt options, but without canna.
Most people will put these options in /etc/mk.conf so that they don't have to remember it each time. In this case, it's only a few options, but for something like /usr/pkgsrc/x11/xorg-server there are about 30 options, most of which you won't want. Typing make show-options in x11/xorg-server's directory gives me (shortened for the reader's sake)
These options are enabled by default: xorg-server-apm xorg-server-ark xorg-server-ati xorg-server-chips xorg-server-cirrus xorg-server-cyrix xorg-server-dummy xorg-server-glint xorg-server-i128 xorg-server-i740 xorg-server-i810 xorg-server-imstt xorg-server-mga xorg-server-neomagic xorg-server-newport xorg-server-nsc xorg-server-nv xorg-server-rendition xorg-server-s3 xorg-server-s3virge xorg-server-savage xorg-server-siliconmotion xorg-server-sis xorg-server-tdfx xorg-server-tga xorg-server-trident xorg-server-tseng xorg-server-vesa xorg-server-vga xorg-server-via xorg-server-vmware
I don't want to type make PKG_OPTIONS="-xorg-server-ati -xorg-server-cardb -xorg-server-cardc" and all the rest each time I reinstall it so I would definitely put it in /etc/mk.conf.
When adding PKG_OPTIONS.pkgname options to /etc/mk.conf, don't put quotes around them. In our example of uim, I would add this line to /etc/mk.conf:
PKG_OPTIONS.uim=qt -canna
If I write
PKG_OPTIONS.uim="qt -canna"
in /etc/mk.conf, I will get an error message when I try to build the package. (Or possibly when I try to build any package.)
So, to sum up
If you want the default options, don't do anything. If you want available options that aren't enabled by default, add them to the PKG_OPTIONS.pkgname variable, either quoted on the command line (e.g. PKG_OPTIONS.uim="qt") or in /etc/mk.conf. If you put them in mk.conf, don't use quotes.
If you don't want an option that is enabled by default, put a - in front of it, either quoted on the command line or without quotes in /etc/mk.conf.
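Putting the summary into practice, the relevant /etc/mk.conf lines might look like the following sketch (the xorg-server variable name is assumed to follow the same PKG_OPTIONS.pkgname pattern, and the option lists are illustrative):

```
PKG_OPTIONS.uim=		qt -canna
PKG_OPTIONS.xorg-server=	-xorg-server-ati -xorg-server-nv
```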
There are various techniques for upgrading packages either by using pre-built binary package tarballs or by building new packages via pkgsrc build system. This wiki page hopefully will summarize all of the different ways this can be done, with examples and pointing you to further information.
Contents
Methods using only binary packages
pkgin
The recommended way to manage your system with binary packages is by using pkgtools/pkgin.
pkg_add pkgin
Then configure your binary repository from which you want to install packages in /usr/pkg/etc/pkgin/repositories.conf. Run 'pkgin update' to get the list of available packages. You can then install packages using 'pkgin install firefox'.
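A minimal repositories.conf contains one repository URL per line; for example (URL, architecture, and release strings are illustrative — use the values matching your system):

```
# /usr/pkg/etc/pkgin/repositories.conf
https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/9.3/All
```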
To update all installed packages, just run
pkgin update
pkgin upgrade
pkg_add -uu
pkg_add's -u option is used to update a package. Basically, it saves the package's current list of depending packages (+REQUIRED_BY), installs the new package, and restores that list.
By giving the option twice (-uu), it will attempt to update prerequisite packages as well.
See the manual page, pkg_add(1), for details.
pkg_chk -b
Use "-b -P URL" where URL is where the binary packages are (e.g. ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/5.1/All/).
For example, to update any missing packages by using binary packages:
pkg_chk -b -P URL -u
Or to automatically add any missing packages using just binary packages:
pkg_chk -b -P URL -a -C pkg_chk.conf
If both -b and -P are given, no pkgsrc tree is used. If packages are on the local machine, they are scanned directly, otherwise the pkg_summary database is fetched. (Using pkg_summary for local packages is on the TODO list.)
(pkg_chk is also covered below.)
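The pkg_chk.conf file passed with -C lists the packages you want, one pkgsrc directory (PKGPATH) per line; a minimal sketch (package choices are illustrative):

```
# pkg_chk.conf: desired packages, one pkgsrc directory per line
misc/screen
www/firefox
```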
Methods that build packages from source
pkgsrc is developed and tested with a consistent source tree. While you can do partial updates, doing so crosses into undefined behavior. Thus, you should be up to date with respect to the repository, either on HEAD, a quarterly branch, or much less likely, a specific date on HEAD.
That said, it is often necessary to update a particular package to an older date to work around broken updates. This risks difficulty but can be reasonable for those who can cope.
Among the options below, the mainstream/popular ones are pkg_rolling-replace, building in a sandbox or on a separate computer, and pbulk, listed in (roughly) order of both increasing setup difficulty and increasing reliability.
make update
Note that there has been little recent discussion of experiences with make update; this is a clue that few people are using it, since it is extremely unlikely that anyone is using it without running into problems.
'make update', invoked in a pkgsrc package directory, will remove the package and all packages that depend on it, keeping a list of such packages. It will then attempt to rebuild and install the package and all the packages that were removed.
It is possible, and in the case of updating a package with hundreds of dependencies arguably even likely, that the process will fail at some point. One can fix problems and resume the update by typing make update in the original directory, but the system can have unusable packages for a prolonged period of time. Thus, many people find 'make update' too dangerous, particularly for something like glib on a system using gnome.
To use binary packages if available with "make update", set "UPDATE_TARGET=bin-install". If a package tarball is not available in ${PACKAGES} locally or at the URLs defined with BINPKG_SITES, the package will be built from source.
To enable manual rollback one can keep binary packages. One method is to always use 'make package', and to have "DEPENDS_TARGET=package" in /etc/mk.conf. Another is to use pkg_tarup to save packages before starting.
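For example, the corresponding mk.conf fragment might look like this sketch combining the two settings mentioned here:

```
# keep a binary package of everything that gets built,
# and prefer existing binary packages when updating
DEPENDS_TARGET=	package
UPDATE_TARGET=	bin-install
```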
make replace
'make replace' builds a new package and substitutes it, without changing the packages that depend on it. This is good, because large numbers of packages are not removed in the hope they can be rebuilt. It is bad, because depending packages may find ABI breaks, because a shlib major version changed, or something else more subtle, such as changed behavior of a program invoked by another program. If there is an ABI change, the correct approach is to 'make replace' the depending package. The careful reader will note that this process can in theory require all packages that depend (recursively) on a replaced package to be replaced. See the pkg_rolling-replace section for a way to automate this process.
The "make replace" target preserves the existing +REQUIRED_BY file, uninstalls the currently installed package, installs the newly built package, reinstalls the +REQUIRED_BY file, and changes depending packages to reference the new package instead. It also marks such depending packages with the "unsafe_depends" build variable, set to YES, if the package version changes, and "unsafe_depends_strict" is set in all cases.
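Conceptually, the packages flagged in this way can be listed with a small helper like the following sketch (the function name is hypothetical, and it assumes pkg_info(1) can query the unsafe_depends build variable with -Q):

```shell
# Hypothetical helper: print installed packages whose unsafe_depends
# build variable is YES, i.e. a dependency was replaced under them.
list_unsafe() {
    pkg_info -e '*' | while read -r pkg; do
        if pkg_info -Q unsafe_depends "$pkg" 2>/dev/null | grep -qx YES; then
            printf '%s\n' "$pkg"
        fi
    done
}
```

One could then 'make replace' each listed package in turn, which is essentially what pkg_rolling-replace (below) automates.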
It can use the pkg_tarup tool to create a tarball package for the currently installed package first, just in case there is a problem. (\todo Check this; it doesn't seem to happen in 2024.)
advice to be reconsidered
\todo Explain this better and perhaps prune; gdt hasn't found any reason to set this, and not any related to make replace.
If you are an expert (and don't plan to share your packages publically), you can also use in your mk.conf:
USE_ABI_DEPENDS?=no
This ignores the ABI dependency recommendations and just uses the required DEPENDS.
Problems occurring during make replace
Besides ABI changes (for which pkg_rolling-replace is a good solution), make replace can fail if packages are renamed or split. A particularly tricky case is when package foo is installed, but in pkgsrc has been split into foo and foo-libs. In this case, make replace will try to build the new foo (while the old monolithic foo is installed). The foo package depends on foo-libs, and so pkgsrc will go to build and install foo-libs. This will fail because foo-libs will conflict with the old foo. There are multiple approaches:
- Use pkg_delete -f, and then make install. This loses dependency information. Run "pkg_admin rebuild-tree". Perhaps do make replace on depending packages.
- Manually save the foo +REQUIRED_BY file, pkg_delete foo, and then make package of the new foo. Put back the +REQUIRED_BY, and pkg_admin set unsafe_depends=YES on all packages in the +REQUIRED_BY.
- pkg_delete -r foo, and make package on everything you still want. Or do make update. Note that this could delete a lot.
- Automating the first option would be a useful contribution to pkgsrc.
In addition, any problem that can occur with building a package can occur with make replace. Usually, the solution is not specific to make replace.
pkg_chk
See all packages which need upgrading:
pkg_chk -u -q
Update packages from sources:
pkg_chk -u -s
You can set UPDATE_TARGET=package in /etc/mk.conf and specify the -b flag, so that the results of compilation work are saved for later use, and binary packages are used if they are not outdated or dependent on outdated packages.
The main problem with pkg_chk is that it deinstalls all to-be-upgraded candidates before reinstalling them. A failure is not entirely fatal, because the current state of packages is saved in a pkg_chk* file at the root of the pkgsrc directory; but if the new packages can't be built, it is still quite problematic.
pkg_rolling-replace
pkgtools/pkg_rolling-replace is a shell script available via pkgsrc. It makes a list of all packages that need updating, and sorts them in dependency order. Then, it invokes "make replace" on the first one, and repeats. A package needs updating if it is marked unsafe_depends or if it is marked rebuild (=YES). If pkg_rolling-replace is invoked with -u, a package needs updating if pkgtools/pkg_chk reports that the installed version differs from the source version. On error, pkg_rolling-replace exits. The user should remove all working directories and fix the reported problem. This can be tricky, but the same process that is appropriate for a make replace should be followed.
Because pkg_rolling-replace just invokes make replace, the problems of ABI changes with make replace apply to pkg_rolling-replace, and the system will be in a state which might be inconsistent while pkg_rolling-replace is executing. But, by the time pkg_rolling-replace has successfully finished, the system will be consistent because every package that has a depending package 'make replaced' out from under it will be marked unsafe_depends, and then replaced itself. This replace "rolls" up the dependency tree because pkg_rolling-replace sorts the packages by dependency and replaces the earliest needing-rebuild package first.
Also, some "make replace" operations might fail due to new packages having conflicts with old packages (newly split packages, moving files between packages, etc.). These need the same manual intervention.
See the pkg_rolling-replace man page (installed by the pkg) for further details. Note that it asks that problems with pkg_rolling-replace itself be separated from problems with make replace operations that pkg_rolling-replace chose to do (when the choice was reasonable), and specifically that underlying package build failures not be reported as pkg_rolling-replace problems.
Example
As an example of running pkg_rolling-replace and excluding packages marked not for deletion, perhaps for separate manual updating:
cd /var/db/pkg
find . -name "+PRESERVE" | awk -F/ '{print $2}'
Update everything except the packages above:
pkg_rolling-replace -rsuvX bmake,bootstrap-mk-files,pax,pkg_install
(Experience does not suggest that this is necessary; pkg_rolling-replace without these exclusions has not been reported to be problematic. And if so, the problem is almost certainly an underlying issue with the specific package.)
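The two steps above can be combined into a small helper that feeds the -X option directly; a sketch (the function name is hypothetical, and it takes the package database directory as an argument rather than assuming the current directory):

```shell
# Hypothetical helper: emit a comma-separated list of packages marked
# with +PRESERVE in the given pkg database directory (e.g. /var/db/pkg),
# in a form suitable for pkg_rolling-replace -X.
preserve_list() {
    find "$1" -name '+PRESERVE' | awk -F/ '{print $(NF-1)}' | sort | paste -s -d, -
}
```

Usage would then be: pkg_rolling-replace -rsuvX "$(preserve_list /var/db/pkg)"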
Real-world experience with pkg_rolling-replace
Even if a lot of packages need to be updated, make replace usually works very well if the interval from the last 'pkg_rolling-replace -u' run is not that long (a month or so). With a longer interval, like a year or two, the odds of package renaming/splitting are higher. Still, for those who can resolve the issues, this is a fairly low-pain and reliable way to update.
Delete everything
If you don't have a production environment or don't care if your packages will be missing for a while, you can just delete everything and reinstall.
This method is the easiest:
# pkg_delete -Rr '*-*'
-or-
# pkg_delete -ff '*-*'
The pattern '*-*' expands to all packages, and both commands delete all packages without caring about dependencies. The second version of the command should be faster, as it does not perform any dependency recursion. (The quotes around the wildcard are so it doesn't get expanded by the shell first.)
Here is one idea (from posting on pkgsrc-users):
Get a list of packages installed:
# pkg_info -Q PKGPATH -a > pkgs_i_want_to_have
Remove all the packages:
# pkg_info -a | sed 's/ .*//' | tail -r | while read p ; do pkg_delete $p ; done
(There are many ways to do this.)
Then edit your "pkgs_i_want_to_have" (created above) as needed. And reinstall just those from it:
# cat pkgs_i_want_to_have | (while read pp ; do cd /usr/pkgsrc/$pp ; make && make install ; done)
An alternative way to choose the packages you want installed is to create your own custom meta-package. A meta-package doesn't install any files itself, but just depends on other packages (usually within a similar topic or need). Have a look at pkgsrc/meta-pkgs category for various examples. If your new meta-package is generic enough and useful for others, please be sure to share it.
different computer
One can use another computer (or a VM) to build packages, and then e.g. use pkgin. That computer can use any of the methods here, with the benefit that you can refrain from changing the operational system until you have a complete set of packages (relative to what you need).
chroot environment
This is basically the same as using another computer, except that a chroot is lighter weight than a VM.
Manually set up a directory containing your base operating system (including compilers, libraries, shells, etc.). Put a copy of your pkgsrc tree and distfiles into it, or use mount to mount shared directories containing these. Then use the "chroot" command to chroot into that new directory. You can even switch users from root to a regular user in the new environment.
Then build and remove packages as you wish without affecting your real production system. Be sure to create packages for everything.
Then use another technique from this list to install from these packages (built in the chroot).
Or instead of using this manual method, use pkgtools/mksandbox, or (older) pkg_comp's chroot.
pbulk
See pkgtools/pbulk. This is the standard approach for building packages in bulk. It can be configured to build only the packages you want, rather than all of them; building all of them can take anywhere from the better part of 24 hours, if you have an enormously powerful machine, to 6 months, if you are retrocomputing.
pkg_comp
Apart from the examples in the man page, it's necessary to supply a list of packages you want to build. The command 'pkg_info -u -Q PKGPATH' will produce a list of the packages you explicitly requested be installed; in some strong sense, it's what you want to rebuild.
After you've built the new packages, you need the list of files to reinstall. Assume that you saved the output of 'pkg_info -u -Q PKGPATH' in /etc/pkgs. The following script will print the name of each binary package:
while read x
do
cd /usr/pkgsrc/$x && make show-var VARNAME=PKGNAME
done < /etc/pkgs
If you use PKGBASE instead of PKGNAME, you get the basename of the file.
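If you also want the resulting binary package file names, a small sketch along the same lines (the helper name is hypothetical; it assumes the default .tgz suffix and the usual ${PACKAGES}/All layout):

```shell
# Hypothetical helper: map PKGNAMEs (read from stdin, one per line)
# to the binary package files produced by the build, assuming the
# default .tgz suffix and a ${PACKAGES}/All layout.
pkg_files() {
    while read -r name; do
        printf '%s/All/%s.tgz\n' "${PACKAGES:-/usr/pkgsrc/packages}" "$name"
    done
}
```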
See this BSDFreak Article for a nice tutorial on how to set up and use pkg_comp.
bulk build framework
NB: This section is historical and probably should be deleted; see pbulk above.
Use the scripts in pkgsrc/mk/bulk/, e.g. as pointed out in http://www.netbsd.org/Documentation/pkgsrc/binary.html#bulkbuild.
To go easy on the existing pkgsrc installation, creating a sandbox (automated chroot environment) is highly recommended here: http://www.netbsd.org/Documentation/pkgsrc/binary.html#setting-up-a-sandbox.
You can later mount the pkgsrc/packages/ via NFS wherever you want and install them like:
env PKG_PATH=/mnt/packages/All pkg_add <pkg>
Or upload them on a www-site and pkg_add http://www.site/packages/All/
wip/distbb-git - distributed bulk builds
Using wip/distbb-git you may build packages in parallel using several machines and/or chroots. Read the PREFIX/share/doc/distbb/README file for instructions.
pkgdepgraph
Look at the EXAMPLES in the man page.
TODO: how to do this using binary packages only? Put in section above.
Alternative LOCALBASE and PKG_DBDIR
Use an alternative LOCALBASE setting to install the packages under a new location and an alternate PKG_DBDIR for your alternate database of installed packages.
You can choose the PKG_DBDIR via shell environment variable or using the -K switch with any of the standard pkg_* tools.
And set the LOCALBASE and your PKG_DBDIR in your mk.conf file.
You could also simply have a symlink from /usr/pkg to your new LOCALBASE (and /var/db/pkg to your new PKG_DBDIR) and change it whenever you are ready.
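A sketch of the corresponding mk.conf settings (the paths are illustrative):

```
LOCALBASE=	/usr/pkg.new
PKG_DBDIR=	/var/db/pkg.new
```

The pkg_* tools can then be pointed at the alternate database with, e.g., pkg_info -K /var/db/pkg.new -a.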
This article describes by examples how a ?pkgsrc developer can update a package to a later version.
You may be looking for ?How to update a package with pkgsrc
Contents
Preparation
Please install pkgtools/pkgdiff. This tool helps create patches for pkgsrc. It contains pkgvi, mkpatches and patchdiff.
Trivial: games/gogui from 0.9.1 to 0.9.3
Part 1: Get the new package to build
- Change the version number in the DISTNAME variable to 0.9.3.
- Run make mdi to download the new distfile and regenerate the checksums.
- Run make.
- Wait until the build has finished ...
Part 2: Install the new package
- Run make deinstall to remove the old package.
- Run make install PKG_DEVELOPER=yes, hoping that the list of installed files has not changed.
- There is no document describing the changes. Only on the SourceForge project page, there are two comments about minor bugfixes. Keep them in mind for later.
- The package has installed without any failures. That's good.
- Run make package clean to finish the work.
Part 3: Committing the update
- Run pkglint -Wall to see if the package looks good.
Run cvs ci to commit the update.
- The text of the commit message is Updated gogui to 0.9.3, followed by an empty line, followed by a comment about the changes.
Run (cd ../../doc && cvs up)
- Run make changes-entry
Run (cd ../../doc && cvs ci CHANGES-*).
- The text of the commit message is Updated gogui to 0.9.3.
Simple: devel/rapidsvn from 0.9.3 to 0.9.4
Part 1: Get the new package to build
- Change the version number in the DISTNAME variable to 0.9.4.
Run make mdi to download the new distfile and regenerate the checksums.
... fetch: Unable to fetch expected file rapidsvn-0.9.4.tar.gz
Look at the homepage (see HOMEPAGE) to see where the distfile is.
- Change MASTER_SITES accordingly.
- Run make mdi again.
Run make and hope that the old patches still apply.
...
=> Applying pkgsrc patches for rapidsvn-0.9.4
1 out of 1 hunks failed--saving rejects to configure.rej
Patch /home/roland/proj/NetBSD/pkgsrc/devel/rapidsvn/patches/patch-ab failed
ERROR: Patching failed due to modified or broken patch file(s):
ERROR: /home/roland/proj/NetBSD/pkgsrc/devel/rapidsvn/patches/patch-ab
They don't.
- Look at patches/patch-ab to see which file it is applied to. It's the configure script, so have a look at that.
- Hmmm, it looks quite different. Let's see if the old line still exists.
- Look for "$GXX" ==. It's in line 19749.
- So we need to remove one of the equals characters, since that's what the patch did.
- Change to the work directory ($WRKOBJDIR) and edit the file(s) you need to patch with pkgvi, make the changes and save it. pkgvi will automatically create a diff.
- Change back to the package directory (e.g. .../pkgsrc/devel/rapidsvn/) and run mkpatches. This will create the new patches from the new diffs in $WRKOBJDIR/.newpatches/
- Now copy the new patch(es) from $WRKOBJDIR/.newpatches/ to patches/ in the package directory.
- Run make patches to regenerate the patches from the working directory.
- Look at the patches to see if they contain more than they should, or expanded strings (for example, /usr/pkg instead of @PREFIX@).
- Run make mps to regenerate the checksums.
- Run make clean; make to try again.
- In the meantime, create a patch for configure.in, since that's the source of the code which got patched in the configure script.
- Wait until the build has finished ...
Part 2: Install the new package
- Run make deinstall to remove the old package.
- Run make install PKG_DEVELOPER=yes, hoping that the list of installed files has not changed.
- Meanwhile, look at the NEWS, ChangeLog and CHANGES files in the WRKSRC to see what has changed between 0.9.3 and 0.9.4. Save that text to later include it in the commit message.
- The package has installed without any failures. That's good.
- Run make package clean to finish the work.
Part 3: Committing the update
- Run pkglint -Wall to see if the package looks good.
Run cvs ci to commit the update.
- The text of the commit message is Updated rapidsvn to 0.9.4, followed by an empty line, followed by the text from the CHANGES file.
Run (cd ../../doc && cvs up)
- Run make changes-entry
- Edit ../../doc/TODO to remove the line of the requested update.
Run (cd ../../doc && cvs ci CHANGES-* TODO).
- The text of the commit message is Updated rapidsvn to 0.9.4, without all the details.
Part 4: Sending the patches to the upstream authors
- Look at the homepage of the project to see where to send bug reports.
- Register an account, if needed.
- Submit a bug report. http://rapidsvn.tigris.org/issues/show_bug.cgi?id=497
- That's all.
You can easily save all of your currently installed NetBSD packages for use on other machines by using pkg_tarup. Here's what to do.
- Install pkg_tarup from pkgsrc, i.e. cd /usr/pkgsrc/pkgtools/pkg_tarup ; make install (as root)
- Now you can use pkg_tarup to create a tarball package from any currently installed package. Just do pkg_tarup [packagename]
Here is an example of how powerful this technique can be. With only a few lines of shell code we can put all the installed packages into a directory:
cd /var/tmp
mkdir packages
cd packages
for PKGNAME in `pkg_info -e "*" | sort`; do
echo "Packaging $PKGNAME"
pkg_tarup -d "." "$PKGNAME" || echo "WARNING: Packaging $PKGNAME failed." 1>&2
done
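On the destination machine, the saved packages can be reinstalled with a loop in the same spirit; a sketch (the function name is hypothetical):

```shell
# Hypothetical helper: install every package tarball found in a
# directory, e.g. the one filled by the pkg_tarup loop above.
install_all() {
    for f in "$1"/*.tgz; do
        [ -e "$f" ] || continue      # skip if the glob matched nothing
        pkg_add "$f" || echo "WARNING: pkg_add $f failed" 1>&2
    done
}
```

Note that pkg_add resolves dependencies by name, so installing everything from the same directory generally works regardless of order.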
Contents
No X installed
Assuming you have just made a clean install of NetBSD and you have not installed the X11 sets, you should do the following:
/etc/mk.conf
Edit your /etc/mk.conf and add the following line:
X11_TYPE=modular
Install xorg
Provided that your pkgsrc tree lies in /usr/pkgsrc, type:
# cd /usr/pkgsrc/meta-pkgs/modular-xorg
# make install
Configure Xorg
Usually no configuration is necessary. To start X you use:
$ startx
Additional configuration
Additional adjustments can be made in ~/.xinitrc and ~/.Xresources. For example, you may want to use a different terminal emulator font, such as pkgsrc/fonts/overpass.
~/.Xresources
To do so, we can add this line to ~/.Xresources:
*font: xft:overpass mono:size=12
For this to take effect, we must merge it into the resource database.
$ xrdb -merge ~/.Xresources
To do this at every X startup, we can add it to ~/.xinitrc.
~/.xinitrc
An example ~/.xinitrc to run a window manager and configure keyboard layout:
xrdb -merge ~/.Xresources # respect ~/.Xresources configuration
setxkbmap -option grp:alt_shift_toggle us,il # two keyboard options, US keyboard and IL,
# with alt+shift as a toggle option
openbox # run your favourite window manager.
Installing a PostgreSQL server under NetBSD is pretty easy. The recommended way is to install the prebuilt PostgreSQL binaries. The PostgreSQL server depends on the PostgreSQL client.
Contents
Setting PKG_PATH
Setting PKG_PATH enables us to easily download and install packages and their dependencies.
# export PKG_PATH=https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/$(uname -m)/$(uname -r)/All/
or if you are using csh:
# setenv PKG_PATH https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/$(uname -m)/$(uname -r)/All/
You should keep this in your .profile or .bash_profile if you mainly use binary packages.
Installing the PostgreSQL Server
# pkg_add -v postgresql92-server
This will install the PostgreSQL client and server along with any missing dependencies (readline, perl), and will add the user pgsql to your user database.
That's it. Almost.
Configuring the Server
Please copy the pgsql example script to /etc/rc.d/
# cp /usr/pkg/share/examples/rc.d/pgsql /etc/rc.d
If you want to keep the database cluster somewhere other than the default location, just change the home directory of the user pgsql before proceeding to the initialisation:
# usermod -d /var/db/pgsql pgsql
This directory must be owned by pgsql:pgsql.
Starting the Server
If you want to initialise the database with a locale other than "C", for example with unicode, invoke this before starting PostgreSQL for the first time:
# /etc/rc.d/pgsql initdb -E unicode
Then start the server by entering:
# /etc/rc.d/pgsql start
This will create all necessary initial databases on the first start.
Start on every boot
To start the server on every boot add
pgsql=yes
to your /etc/rc.conf
Creating an example Database
To create a database, you can run createdb as root, connecting as the pgsql superuser:
# createdb -e -h 127.0.0.1 -U pgsql newdbname
or switch to the user pgsql and create the database there:
# su - pgsql
$ createdb testdb
$ exit
Using the Database
PostgreSQL provides a tool to manage the database, called psql.
# psql -U pgsql testdb
Welcome to psql 8.0.4, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
testdb=#
PHP and PostgreSQL
You may wish to install the postgres Module for PHP, e.g. for PHP 7.0 use:
# pkg_add -v php70-pgsql
Have fun.
Additional Information
- pkg_add(1) Manpage
- su(1) Manpage
Installing a MySQL server is easy. The fastest way is to install the binary package. If you wish to use Apache and PHP, please read the article How to install a LAMP Server.
# pkg_add -v mysql-server
If you decide to install via pkgsrc, because of a newer version, just enter:
# cd /usr/pkgsrc/databases/mysql57-server/
# make install clean
It will automatically compile the client and the server.
That's it... almost. Now copy the rc.d script into /etc/rc.d/:
# cp /usr/pkg/share/examples/rc.d/mysqld /etc/rc.d/
and start the MySQL server via NetBSD's rc.d(8) framework. Only one modification is needed to rc.conf(5):
# echo "mysqld=yes" >> /etc/rc.conf
If you want to copy the rc.d scripts automatically with pkgsrc, you can use:
PKG_RCD_SCRIPTS=YES
in mk.conf(5).
If MySQL is not starting up, you may need to create and set permissions on the /var/mysql directory and set up the default MySQL DB:
# mkdir /var/mysql
# mysql_install_db
# chown -R mysql:mysql /var/mysql
The default MySQL server database root password is auto-generated and marked expired upon creation. For security reasons, you should set your root password as soon as possible.
If you need information about the MySQL server package, use the following:
# pkg_info mysql-server
Have fun.
See also
LAMP is an acronym for a combined set of software used to run a web server: Linux, Apache, MySQL, and Perl, Python, or PHP. Since the "L" stands for Linux, there is also an acronym named WAMP for the Windows operating system. This also means that the title of this article is misleading: the approach is to install the same combined set of software, but using NetBSD as the operating system instead of Linux.
In the following examples, we will install all components using pkgsrc, building all packages from source.
Contents
Installing the Apache web server
The Apache 2.4 server comes with two different threading models, of which prefork is installed by default. The worker model is not recommended if you wish to use Apache with PHP. As that is the case, we will install a default Apache 2.4 server.
# cd /usr/pkgsrc/www/apache24
# make install clean clean-depends
This will install the Apache 2.4 server and all its dependencies. If your build was successful, you should now edit the Apache configuration file /usr/pkg/etc/httpd/httpd.conf to fit your needs. At least set the Listen attribute and your ServerName. If your machine's hostname does not resolve globally, put it into your /etc/hosts file, otherwise Apache will refuse to start.
If you wish to start the Apache web server at boot time, copy the rc.d example script from /usr/pkg/share/examples/rc.d/apache to /etc/rc.d and then add apache=yes to your /etc/rc.conf file.
# cp /usr/pkg/share/examples/rc.d/apache /etc/rc.d
If you want to copy the rc.d scripts automatically with pkgsrc, you can use:
PKG_RCD_SCRIPTS=YES
in your /etc/mk.conf
You can now start, stop, and restart the Apache web server using apachectl, or using the boot script /etc/rc.d/apache.
To start the server enter:
# apachectl start
or
# /etc/rc.d/apache start
To stop the server, substitute start with stop. If you're running a production server, pay attention to the apachectl graceful option.
Installing MySQL
You can skip this part, if you don't want to install a MySQL server. To install the MySQL server enter:
# cd /usr/pkgsrc/databases/mysql57-server
# make install clean clean-depends
This will install the MySQL server and all its dependencies, like the MySQL client.
Configuring the MySQL server
Please copy the example start script to /etc/rc.d
# cp /usr/pkg/share/examples/rc.d/mysqld /etc/rc.d
and add mysqld=yes to your /etc/rc.conf
You can now start the MySQL server using
# /etc/rc.d/mysqld start
and likewise stop and restart it.
The default MySQL server database root password is auto-generated and marked expired upon creation. For security reasons, you should set your root password as soon as possible.
You can pass most of the options to the server via the file /etc/my.cnf. If you want the server to listen only on localhost, for instance, create /etc/my.cnf and add
[mysqld]
port=3306
bind-address=127.0.0.1
and restart your MySQL server. To check if your MySQL server is really listening only on localhost, use ?sockstat.
# sockstat -l
For many more options, consider reading the MySQL Documentation.
Installing the PHP module for Apache
# cd /usr/pkgsrc/www/ap-php
# make install clean
This will install by default the latest version of PHP 7.x and the PHP 7 module for Apache 2.4.
Configuring PHP
You should now add the LoadModule and PHP handler definitions to your Apache configuration file /usr/pkg/etc/httpd/httpd.conf. Add the following lines:
LoadModule php7_module /usr/pkg/lib/httpd/mod_php7.so
and
AddType application/x-httpd-php .php
and if you wish
DirectoryIndex index.html index.php
Installing the MySQL module for PHP
This step is important and enables you to make MySQL database connections from your PHP script.
cd /usr/pkgsrc/databases/php-mysql/
make install clean
Now edit /usr/pkg/etc/php.ini
and add the line
extension=mysql.so
You need this to enable MySQL functions in your PHP module.
Now restart your Apache web server. To test if PHP is working, create a small file called test.php in your document root directory (by default /usr/pkg/share/httpd/htdocs), containing only one line with the function phpinfo().
<?php phpinfo(); ?>
If you use PHP 7 and wish to use short tags like <? phpinfo() ?>, then edit your /usr/pkg/etc/php.ini file and change the option short_open_tag = Off to On to make this line work. In PHP 7, short_open_tag is off by default.
Open your browser and point it to this URL:
http://127.0.0.1/test.php
You should now see a website with information regarding your PHP installation and a table named mysql, in the middle of the document, with MySQL information.
That's it. You can now install software like a phpMyAdmin, or a Wiki. Have fun.
See also
Commands
- ?sockstat
First, I bootstrapped pkgsrc into the directory /usr/pkg/2006Q4.
$HOME/proj/pkgsrc/bootstrap/bootstrap --prefix=/usr/pkg/2006Q4 --unprivileged --compiler=sunpro
mkdir /usr/pkg/2006Q4/etc
mv work/mk.conf.example /usr/pkg/2006Q4/etc/mk.conf
Then I made a backup of that directory. You never know.
cd /usr/pkg && gtar cfz 2006Q4.tar.gz 2006Q4
Since in a previous try the pre-build program had removed the pkgsrc directory, I disabled it by inserting an exit 0 at line 2. Now that there is no danger of anything in that directory being deleted, I could edit the mk.conf file and adjust some local settings.
/usr/pkg/2006Q4/mk.conf
# Example /usr/pkg/2006Q4/etc/mk.conf file produced by bootstrap-pkgsrc
# Tue Jan 9 13:01:49 CET 2007
.ifdef BSD_PKG_MK # begin pkgsrc settings
PKGSRC_COMPILER= sunpro
UNPRIVILEGED= yes
PKG_DBDIR= /usr/pkg/2006Q4/var/db/pkg
LOCALBASE= /usr/pkg/2006Q4
VARBASE= /usr/pkg/2006Q4/var
PKG_TOOLS_BIN= /usr/pkg/2006Q4/sbin
PKGMANDIR= man
TOOLS_PLATFORM.awk?= /usr/pkg/2006Q4/bin/nawk
TOOLS_PLATFORM.sed?= /usr/pkg/2006Q4/bin/nbsed
FETCH_CMD= /usr/pkg/2006Q4/bin/ftp
TOOLS_PLATFORM.pax?= /usr/pkg/2006Q4/bin/pax
TOOLS_PLATFORM.tar?= /usr/pkg/2006Q4/bin/tar
TOOLS_PLATFORM.mtree?= /usr/pkg/2006Q4/sbin/mtree
DISTDIR= /usr/pkg/distfiles
PACKAGES= /usr/pkg/2006Q4-packages
PKG_DEVELOPER= yes
CC= cc
CXX= CC
CPP= cc -E
CXXCPP= CC -E
SUNWSPROBASE= /local/SUNWspro
USE_LANGUAGES+= c99
TOOLS_PLATFORM.nroff= /opt/bin/groff
.endif # end pkgsrc settings
/usr/pkg/2006Q4/build.conf
#
osrev=`uname -r`
arch=`uname -m`
USR_PKGSRC="$HOME/proj/pkgsrc"
MAKECONF="/usr/pkg/2006Q4/etc/mk.conf"
PRUNEDISTFILES=no
NICE_LEVEL="nice -n 20"
LINTPKGSRC_CACHE=no
ADMIN="rillig@informatik.uni-hamburg.de"
ADMINSIG="- Roland"
REPORTS_DIR="/usr/pkg/2006Q4-pkgstat"
REPORT_BASEDIR=`date +%Y%m%d.%H%M`
REPORT_HTML_FILE="report.html"
REPORT_TXT_FILE="report.txt"
REPORTS_URL="file://$REPORTS_DIR"
UPDATE_VULNERABILITY_LIST=yes
PRUNEPACKAGES=yes
MKSUMS=yes
MKSUMMARY=no
RSYNC_DST=ftp.NetBSD.org:/pub/NetBSD/packages/pkgsrc-200xQy/NetBSD-a.b.c/i386
RSYNC_OPTS='-e ssh'
I began to setup the bulk build environment in a screen.
$ screen -S bulk-2006Q4
$ sh ./proj/pkgsrc/mk/bulk/build -c /usr/pkg/2006Q4/etc/build.conf
It seems to work, and I'm waiting for the package databases to be generated. But since I have a T2000 here, I'm thinking about parallelizing the work. For generating the databases, the two "make" processes could be run completely in parallel, making use of two of the processors.
In the main phase, all packages that have the exact same dependencies can be built in parallel, but must be still installed one after another.
Help, my files are being removed
When I had started the bulk build, it complained that it couldn't find /usr/pkg/2006Q4/bin/nawk anymore. But looking at the backup I made above, I saw that it had been there after bootstrapping. The simple solution was to change the tools definitions in mk.conf so that they point to tools outside ${LOCALBASE}:
TOOLS_PLATFORM.awk= /usr/pkg/current/bin/nawk
TOOLS_PLATFORM.sed?= /usr/pkg/current/bin/nbsed
FETCH_CMD= /usr/pkg/current/bin/ftp
TOOLS_PLATFORM.pax?= /usr/pkg/current/bin/pax
TOOLS_PLATFORM.tar?= /usr/pkg/current/bin/tar
TOOLS_PLATFORM.mtree?= /usr/pkg/current/sbin/mtree
Luckily, bin/bmake is not registered as belonging to any package, so it hasn't been removed yet.
The bulk build also tried to deinstall the infrastructure packages, so I had to protect them against that:
BULK_PREREQ+= pkgtools/bootstrap-mk-files
BULK_PREREQ+= pkgtools/tnftp
BULK_PREREQ+= pkgtools/mtree
BULK_PREREQ+= pkgtools/pax
BULK_PREREQ+= pkgtools/pkg_install
BULK_PREREQ+= sysutils/mtree
BULK_PREREQ+= sysutils/checkperms
First, I bootstrapped pkgsrc into $HOME/bulk:
$ cd
$ env \
CC=cc \
CXX=CC \
MIPSPROBASE=$HOME/mipspro-wrapper \
./proj/pkgsrc/bootstrap/bootstrap \
--prefix=$HOME/bulk \
--unprivileged \
--compiler=mipspro \
--quiet
...
Then, make a backup copy of LOCALBASE:
$ tar cfz bulk.tar.gz bulk
It is a good idea to store all configuration files outside of LOCALBASE:
$ mkdir bulk-etc
$ cp bulk/etc/mk.conf bulk-etc/.
Like on Solaris, I needed to comment out the pre-build script. The configuration files look similar to the ones from the Solaris build.
Contents
build.conf
osrev=`uname -r`
arch=`uname -m`
USR_PKGSRC="$HOME/proj/pkgsrc"
MAKECONF="$HOME/bulk-etc/mk.conf"
PRUNEDISTFILES=no
NICE_LEVEL="nice -n 20"
LINTPKGSRC_CACHE=no
ADMIN="rillig@localhost"
ADMINSIG="- Roland"
REPORTS_DIR="$HOME/bulk-reports"
REPORT_BASEDIR=`date +%Y%m%d.%H%M`
REPORT_HTML_FILE="report.html"
REPORT_TXT_FILE="report.txt"
REPORTS_URL="file://$REPORTS_DIR"
UPDATE_VULNERABILITY_LIST=no
PRUNEPACKAGES=no
MKSUMS=no
MKSUMMARY=no
RSYNC_DST=none
RSYNC_OPTS=none
mk.conf
# Example /usr/people/rillig/bulk/etc/mk.conf file produced by bootstrap-pkgsrc
# Wed Feb 21 07:42:55 EST 2007
.ifdef BSD_PKG_MK # begin pkgsrc settings
OPSYS= IRIX
ABI= 64
PKGSRC_COMPILER= mipspro
UNPRIVILEGED= yes
PKG_DBDIR= /usr/people/rillig/bulk/var/db/pkg
LOCALBASE= /usr/people/rillig/bulk
VARBASE= /usr/people/rillig/bulk/var
PKG_TOOLS_BIN= /usr/people/rillig/bulk/sbin
PKGMANDIR= man
TOOLS_PLATFORM.install?= /usr/people/rillig/pkg/bin/ginstall
TOOLS_PLATFORM.sed?= /usr/people/rillig/bulk/bin/nbsed
FETCH_CMD= /usr/people/rillig/bulk/bin/ftp
TOOLS_PLATFORM.pax?= /usr/people/rillig/bulk/bin/pax
TOOLS_PLATFORM.tar?= /usr/people/rillig/bulk/bin/tar
TOOLS_PLATFORM.mtree?= /usr/people/rillig/bulk/sbin/mtree
IMAKEOPTS+= -DBuild64bit -DSgiISA64=4
DISTDIR= /usr/people/rillig/distfiles
PKG_DEVELOPER= yes
CC= cc
CXX= CC
CPP= cc -E
CXXCPP= CC -E
MIPSPROBASE= /usr/people/rillig/mipspro-wrapper
TOOLS_PLATFORM.nroff= /usr/people/rillig/pkg/bin/groff
BULKFILESDIR= /usr/people/rillig/bulk-logs
PACKAGES= /usr/people/rillig/bulk-packages
WRKOBJDIR= /usr/people/rillig/bulk-tmp
.endif # end pkgsrc settings
To get a usable environment, I had built some packages beforehand in $HOME/pkg, so that I can link to them (for example ginstall and groff).
$ screen -S bulk
$ sh proj/pkgsrc/mk/bulk/build -c /usr/people/rillig/bulk-etc/build.conf
...
BULK> Package bzip2-1.0.4 not built yet, packaging...
/usr/people/rillig/pkg/bin/bmake bulk-package PRECLEAN=no
###
### Wed Feb 21 08:56:12 EST 2007
### pkgsrc build log for bzip2-1.0.4
###
bmake: exec(/bin/sh) failed (Arg list too long)
*** Error code 1
Grmpf. IRIX allows only 20k for the command line plus the environment, and that gets filled pretty quickly. Let's see what the cause is. It's the command starting with "${RUN} set +e;" and going until the lonely "fi" in line 534. That's quite a lot. Since I don't know how to break it into smaller chunks, I stop here.
Some hours later
I've made some progress. I didn't rewrite the bulk builds but simply patched bmake to create a temporary file and write the overly long command lines there when an execve fails due to E2BIG. I applied this patch in the devel/bmake directory and re-ran bootstrap. Things seem to work now.
At least, lang/perl is building, which takes some time. Not to speak of the database generation that follows. Meanwhile, I'm adding the following to the mk.conf file, just as in the Solaris bulk build:
BULK_PREREQ+= pkgtools/bootstrap-mk-files
BULK_PREREQ+= pkgtools/tnftp
BULK_PREREQ+= pkgtools/mtree
BULK_PREREQ+= pkgtools/pax
BULK_PREREQ+= pkgtools/pkg_install
BULK_PREREQ+= sysutils/mtree
BULK_PREREQ+= sysutils/checkperms
See also
When working with pkgsrc, pkgsrc leaves working directories called work in the application directory. If you invoke the build with clean, then this directory is cleaned, but not the working directories of the dependencies. To avoid this problem, you should set DISTDIR and WRKOBJDIR in mk.conf.
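To illustrate, such settings in mk.conf might look like this (the paths are only examples; any location outside the pkgsrc tree will do):

```make
# /etc/mk.conf -- example locations, adjust to taste
DISTDIR=	/usr/distfiles	# downloaded distfiles, shared between builds
WRKOBJDIR=	/usr/work	# work directories are created here
```

With WRKOBJDIR set, removing left-over work directories becomes a single rm on that one tree instead of a find over all of pkgsrc.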
# make install clean
If you want to clean the directories of the dependencies as well, you have to run:
# make install clean clean-depends
Either way, sometimes it is not possible or desirable to clean the working directories immediately. You can remove them all later using this one-line shell command:
# find /usr/pkgsrc -name work -exec rm -r {} +
or this one:
# find /usr/pkgsrc -maxdepth 3 -mindepth 3 -name work -exec rm -r {} +
You can also change the place where the work directories are created by setting WRKOBJDIR in /etc/mk.conf.
You can also clean them by running make clean from the top of the pkgsrc tree, but this is not advised, as it takes a very long time. Using pkgtools/pkgclean is another option.
See also
- How to use pkgsrc
- rm(1)
- find(1)
Contents
Questions & Answers
How can I get a list of all make variables that are used by pkgsrc?
That's difficult. But you can get a very good approximation by changing to a package directory and running the following command:
make -dv show-var VARNAME=VARNAME \
| sed -n 's,^Global:\([^ ]*\) =.*,\1,p' \
| sed 's,\..*,.*,' \
| sort -u \
| grep ^\[A-Z\] \
| less
Another possibility is to run bmake show-all. This will list many (but not all) variables.
If you need more information about a specific variable, run bmake help topic=VARNAME or have a look at pkglint's variable definition file.
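For example (the package and variable are arbitrary choices):

```sh
cd /usr/pkgsrc/misc/figlet    # any package directory will do
bmake help topic=WRKOBJDIR    # show the help text for one variable
```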
When patching a GNU-style configure script, where should I add changes?
If you want your changes to override everything else, then look for "ac_config_files=" and put it somewhere before that line.
I'm going to make incompatible changes to pkgsrc. Where should I document it?
In the file doc/CHANGES-*.
What's the difference between ${TEST}, test and [?
There is practically no difference. All the standard options are supported on all platforms. See also The pkgsrc portability guide.
See also
This howto explains how NetBSD current can use thumb mode on ARM architecture (evbarm).
Introduction
While normal ARM instructions are 32 bits wide, thumb provides a subset of ARM instructions in 16 bits, and thus reduces executable sizes both in memory and on the filesystem. See Wikipedia for more details.
On large codebases like NetBSD, ARM and thumb mode code can, and must, co-exist. This is because some parts of the ARM specific code cannot be programmed with the 16-bit thumb instructions. Luckily, GCC makes this relatively easy with the thumb-interwork option.
As an overview, most machine independent (MI) C code can be compiled to thumb mode and most machine dependent (MD) C and assembly code can only be compiled to ARM mode. Some parts of the ARM port specific code in NetBSD have support for thumb mode, but most of them do not.
NetBSD's CPUFLAGS build variable is used to tell the compiler when an object file is compiled to thumb mode and when thumb interworking needs to be enabled. The GCC options are -mthumb and -mthumb-interwork. By default, ARM target binaries are of course compiled to ARM mode.
In a large codebase like NetBSD it becomes difficult to manually check whether any one object file can be compiled to thumb mode. Luckily, brute force works with the help of the make option -k, as in keep going even if one object file does not compile. By compiling the whole tree with CPUFLAGS=-mthumb and MAKEFLAGS=-k, all of the machine dependent object files that fail at build time can be found, and marked with the help of Per file build options override to be compiled to ARM mode with thumb interworking.
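As a sketch of this brute-force pass (the build.sh invocation mirrors the ones used later on this page; the per-file override line is an assumption based on the Per file build options override mechanism, and locore.S is just an example file name):

```sh
# compile the whole tree to thumb, keeping going past failing objects
./build.sh -U -O obj -m evbarm -V CPUFLAGS=-mthumb -V MAKEFLAGS=-k build

# afterwards, mark each failing file to be built as ARM code instead,
# e.g. in the Makefile that builds it:
#   CPUFLAGS.locore.S+= -marm -mthumb-interwork
```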
Build time failures are of course not the only things that can go wrong with thumb support. At run time, some parts of the kernel or userspace are expected to be in ARM mode. In userspace, for example, this means that the dynamic linker (ld.elf_so), userspace locking and C runtime initialization (CSU) parts need to be in ARM mode.
Userspace
Userspace in current compiles to thumb mode, but doesn't work due to a linker bug.
If the binutils package is upgraded to 2.18.50 from 2.16.1, then thumb mode works for userspace. After this, only a build script marking the ARM and thumb mode files is needed.
The following patches do it all for a current snapshot from Oct 22nd, 2008:
After the patches have been applied on top of current, the build process continues like this:
build tools
./build.sh -U -O obj -m evbarm -j 3 tools
build userspace to thumb; note that the script expects to find the tools directory under obj
./thumb.sh
build a normal ARM mode kernel, where CONFIG is the configuration file name like TISDP2420
./build.sh -U -O obj -m evbarm -j 3 kernel=CONFIG
The next step is to verify that the thumb mode userspace works by booting into it. On OMAP 2420 development board this requires setting up a TFTP server for the kernel executable netbsd.bin and then setting up the root filesystem over NFS (and configuring the kernel accordingly).
Known problems
While the thumb mode userspace boots and most programs and the shell work, some programs may have issues. Known regressions at this time are:
- thumb mode gcc segfaults on target
# cat > test.c << EOF
> int main(void){
> return 0;
> }
> EOF
# gcc --verbose -Wall test.c
Using built-in specs.
Target: arm--netbsdelf
Configured with: /usr/src/tools/gcc/../../gnu/dist/gcc4/configure --enable-long...
Thread model: posix
gcc version 4.1.3 20080704 prerelease (NetBSD nb1 20080202)
/usr/libexec/cc1 -quiet -v test.c -quiet -dumpbase test.c -auxbase test -Wall ...
#include "..." search starts here:
#include <...> search starts here:
/usr/include
End of search list.
GNU C version 4.1.3 20080704 prerelease (NetBSD nb1 20080202) (arm--netbsdelf)
compiled by GNU C version 4.1.3 20080704 (prerelease) (NetBSD nb1 20080...)
GGC heuristics: --param ggc-min-expand=34 --param ggc-min-heapsize=7808
Compiler executable checksum: c67c46e1fc2de869e7015b2c172bd073
test.c: In function 'main':
test.c:3: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See for instructions.
- thumb mode gdb segfaults on target
# gdb --verbose /bin/cat
[1] Segmentation fault (core dumped) gdb --verbose /bin/cat
These issues may be solved by upgrading to newer versions of GCC compiler and gdb debugger.
Another problem is that newer binutils adds more symbols in thumb mode than older binutils, which makes thumb mode binaries larger on disk than with the old binutils, and larger than ARM mode binaries. These extra symbols only affect the file size; the sections loaded into memory are smaller in thumb mode than in ARM mode. Also, all of these extra sections and symbols can easily be stripped by providing the NetBSD build tools with a STRIPFLAG=-s build variable, though without this patch libraries are not stripped.
Contents
Supported hardware
Creative Music System
Very old and rare synthesizer.
See cms(4).
PC speaker
It has one-voice polyphony and sounds just awful. Useful only for testing MIDI input devices.
See pcppi(4).
Roland MPU-401
MIDI interface by Roland. It became popular thanks to excessive cloning.
Supported on many ISA cards, and following PCI cards:
- C-Media CMI8738 - cmpci(4) - support broken in NetBSD 4.0?
- ESS Solo-1 - eso(4)
- ForteMedia FM801 - fms(4)
- Yamaha DS-1 - yds(4)
Usually MPU interfaces are connected to the MIDI/joystick port on sound cards. You won't be able to play or receive anything unless you connect some external MIDI device to that port. In some rare cases, though, the MPU interface is connected to an on-board/daughterboard WaveTable MIDI engine.
See mpu(4)
Simple MIDI interfaces
Simple MIDI interfaces are supported on many ISA cards, and following PCI cards:
- Cirrus Logic CS4280 - clcs(4)
- Creative Labs SoundBlaster PCI (Ensoniq AudioPCI based) - eap(4)
- Trident 4DWAVE and compatibles - autri(4)
Usually simple MIDI interfaces are connected to the MIDI/joystick port on sound cards. You won't be able to play or receive anything unless you connect some external MIDI device to that port.
Note: The MIDI port and synth on SoundBlaster Live! and newer cards by Creative are unsupported.
USB MIDI devices
Many USB MIDI devices are supported. Synth modules, keyboards and MIDI interfaces are handled well.
See umidi(4)
Yamaha OPL2 and OPL3
Popular single-chip FM synthesizer. Almost all ISA cards come with such chip.
Some of the newer cards have a compatible FM engine too. PCI cards based on the following chipsets have it:
- C-Media CMI8738 - cmpci(4) - opl support broken in NetBSD 4.0?
- ESS Solo-1 - eso(4)
- ForteMedia FM801 - fms(4)
- S3 SonicVibes - sv(4)
- Yamaha DS-1 - yds(4)
The NetBSD opl driver has built-in General MIDI instrument definitions, so your system is ready to play without additional configuration.
Note: New PCI cards by Creative Labs do not have this chip.
See opl(4)
Identifying MIDI devices
You can easily discover what kind of MIDI devices are available - try grepping dmesg:
dmesg | grep midi
Sample output:
midi0 at pcppi1: PC speaker (CPU-intensive output)
midi1 at opl0: Yamaha OPL3 (CPU-intensive output)
umidi0 at uhub1 port 2 configuration 1 interface 1
umidi0: Evolution Electronics Ltd. USB Keystation 61es, rev 1.00/1.13, addr 2
umidi0: (genuine USB-MIDI)
umidi0: out=1, in=1
midi2 at umidi0: <0 >0 on umidi0
In this case three MIDI devices are detected - PC speaker, Yamaha OPL3 and USB MIDI device (Keystation 61es keyboard in this case).
Connecting MIDI devices
Connecting MIDI devices is very simple. For example if you want to drive OPL3 using USB MIDI keyboard try:
cat /dev/rmidi2 > /dev/rmidi1
You can now play :).
MIDI software for NetBSD
Utility called midiplay(1) comes with NetBSD.
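For example, to send a standard MIDI file to the OPL3 synth from the dmesg output above (the file name is hypothetical):

```sh
midiplay -d 1 song.mid    # -d selects the MIDI device index shown by dmesg
```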
Contents
Introduction
This page describes how to set up NetBSD to use the Linux lvm tools and libdevmapper for LVM. For now my work was done on the haad-dm branch in the main NetBSD repository; I want to merge my branch back to the main repository as soon as possible.
LVM support has been merged to NetBSD-current on the 23rd of December 2008.
Details
Tasks needed to get LVM on NetBSD working
- Get latest sources
- Compile new kernel and tools
- Create PV, LV and VG's and enjoy
Howto update/checkout sources
You can checkout the latest sources with this command
$ export CVS_RSH="ssh"
$ export CVSROOT="anoncvs@anoncvs.se.netbsd.org:/cvsroot"
$ cvs checkout -dfP src
Or update by command
$ export CVS_RSH="ssh"
$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ cvs update -dfP
Only three directories were changed in my branch:
- sys/dev/dm
- external/gpl2/libdevmapper
- external/gpl2/lvm2tools
Howto setup NetBSD system to use lvm
The easiest way is to build a distribution and sets. You need to build with the flag MKLVM set to yes.
$ cd /usr/src
$ ./build.sh -u -U -V MKLVM=yes tools distribution sets
After a successful build, update your system with them. You can also run make install (as root) in src/external/gpl2/lvm2 to install the userland part of LVM into an existing NetBSD system. There is also a simple kernel driver used by the LVM2 tools in src/sys/modules/dm; you have to install and load this driver before you test lvm. NetBSD LVM uses the same tools as Linux and therefore has the same user interface as many common Linux distributions.
Howto compile new kernel
The kernel compilation procedure is described at http://www.netbsd.org/docs/guide/en/chap-kernel.html#chap-kernel-build.sh. To get device-mapper compiled into the kernel, you have to add this option to the kernel config file:
pseudo-device dm
Using new MODULAR modules
There are two kinds of modules in NetBSD now: old LKM and new MODULAR modules. The new modules are built in sys/modules/. All GENERIC kernels are compiled with support for the new modules. There is a reachover makefile for the dm driver in sys/modules/dm; you can use it to build the dm module.
For loading new style modules, the new module utilities are needed. You need to add
MKMODULAR=YES
to /etc/mk.conf and build modload, modstat and modunload.
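Assuming the dm module has been built and installed as described, loading and verifying it might look like this (run as root):

```sh
modload dm           # load the device-mapper module
modstat | grep dm    # check that it appears in the list of loaded modules
```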
Compile lvm2tools and libdevmapper
To get lvm working, you need to compile and install the Linux lvm tools. They are located in
- external/gpl2/libdevmapper
- external/gpl2/lvm2tools
Only make/make install is needed to build and install the tools. The tools are not integrated into the lists and build system yet, so it is possible that when you try to add them to the build process it will fail (with an error about too many files in destdir).
Using lvm on NetBSD
lvm2tools are used to manage your lvm devices.
lvm pvcreate /dev/raw_disk_device # create Physical Volume
lvm vgcreate vg00 /dev/raw_disk_device #create Volume Group-> pool of available disk space.
lvm lvcreate -L20M -n lv1 vg00 # create Logical volume aka Logical disk device
newfs /dev/vg00/rlv1 # newfs without -F and -s doesn't work
mount /dev/vg00/lv1 /mnt # Enjoy
After reboot, you can activate all existing Logical Volumes (LV) in the system with the command:
lvm vgchange -a y
You can use lvm lvdisplay, lvm vgdisplay and lvm pvdisplay to see the status of your LVM devices.
You can also use lvm lvextend/lvreduce to change size of LV.
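For example, a sketch of resizing the lv1 volume created above (the sizes are arbitrary; the filesystem on top must be resized separately, and shrinking can destroy data):

```sh
lvm lvextend -L+10M vg00/lv1   # grow the LV by 10 MB
lvm lvreduce -L-5M vg00/lv1    # shrink it again (dangerous with live data)
```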
I haven't tested my driver on an SMP system; there are probably some bugs in the locking, so be aware of that and do not try to extend/reduce a partition during I/O (something bad can happen).
TODO:
- Review locking - I will probably allow only one ioctl call to be inside the dm driver at a time. DONE
- Write snapshot driver - only a skeleton has been written so far.
- Add ioctls needed for correct newfs functionality. I tried to implement them in device-mapper.c::dmgetdisklabel but it doesn't work yet. DONE
- Write an lvm rc.d script to enable LVM before disk mounts so we can use lvm for system partitions. DONE
Contents
Overview
Apple Time Machine is a backup mechanism which uses an Apple sparse filesystem model, with extended attributes, to store the files and directories of your OSX system as an incremental state. You can walk 'back in time' through the history of the filesystem and recover older versions of files, or files which have subsequently been deleted. Unlike a dump, it's not a complete independent collection of files: it prunes back in time against what it knows has changed, and so cannot be relied on in the same way as offline disk or tape backup written in a rigorous system. Nor is it an archive: if you want to preserve something, you need to manage that independently of a Time Machine setup. But, having said that, it's enormously useful and very user friendly, and lots of OSX users are very happy with it. Most of them use directly attached media like a firewire or USB disk, or an Apple appliance like the Time Capsule. However, it is technically possible to do Time Machine over a network, to a network-mounted filesystem.
NetBSD does not support HFS+ format filesystems directly in a way which can be exposed to an OSX host over the network. Normally, its UNIX filesystems are mounted on clients by protocols like NFS, or SMB, or AFP (Apple File Protocol) through either the built-in facilities of mount_nfs(8), mount_smbfs(8) or a package like netatalk. These are all provided by userspace daemons.
If you want to use these, there are documented ways to do this, such as the apple time machine freebsd in 14 steps page Rui Paulo wrote. They each have advantages and disadvantages, noting the need for special file support and extended attributes. It's probable that making the correct Apple sparse filesystem as a single file image, and moving this to the network-backed filestore, gets round most of the problems, if you set the correct magic flag in your OSX to permit non-standard filesystems to be used to 'home' the time machine.
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
However, the NetBSD iSCSI implementation is robust, and efficient, and will provide arbitrary client-side filesystems (such as HFS+, or Windows filesystems) because its presenting SCSI disk as raw blocks. These raw blocks are typically provided from a sparse file, in the NetBSD filesystem tree.
iSCSI talks about targets (which is what a provider of an iSCSI disk is) and initiators (which is what a client connecting to a given target is). In this situation, NetBSD will be running as the target, via the iscsi-target(8) daemon. The client has to have an initiator, which can present the target as a device.
In order to use this on OSX, you need an iSCSI initiator. One which is freely available is Studio Network Solutions globalSAN iSCSI initiator for OSX. At the time of writing, the version which works for me on Snow Leopard is 3.3.0.43 which is distributed as a ZIP'ed DMG file.
Initialize a target
To create a target, you edit the targets(5) file.
The default example shipped with NetBSD 5.0 and later found in /etc/iscsi/targets
is:
#
# Structure of this file:
#
# + an extent is a straight (offset, length) pair of a file or device
# it's the lowest common storage denominator
# at least one is needed
# + a device is made up of one or more extents or other devices
# devices can be added in a hierachical manner, to enhance resilience
# + in this example, no device definitions are necessary, as the target
# will just use a simple extent for persistent storage
# + a target is made up of 1 or more devices
# The code does not support RAID1 recovery at present
# Simple file showing 1 extent, mapped straight into 1 target
# extent file or device start length
extent0 /tmp/iscsi-target0 0 100MB
# target flags storage netmask
target0 rw extent0 0.0.0.0/0
The target should be the name of a file in the filesystem you want it to reside in. When the iscsi-target daemon first runs over this configuration, targets are initialized as UNIX sparse files. Be very careful not to touch this file with cp or any other operation which tickles NetBSD into filling in the 'holes' in the sparse file: dump and tar should be safe, as is rsync if the -S flag is given to it (see sparse files below).
The /etc/iscsi/targets file can implement simple ACL models for the IP network/prefix of the client(s) you wish to permit. The default in this file as distributed is a fully open iSCSI target. There is an associated /etc/iscsi/auth file which can be used to instantiate CHAP and other protections over the iSCSI attachment by the client.
Be warned that the target name has to include a digit which will be translated to the SCSI 'LUN' number of the device for the client. Unless you know what you are doing, its best to leave this as 0 since the client may well demand only LUN0 is used.
Once you have configured a target, you need to enable and start the iscsi-target(8) daemon. Edit /etc/rc.conf and set iscsi_target=YES, and then you can run /etc/rc.d/iscsi_target start (it will start automatically on reboot once enabled in /etc/rc.conf).
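The steps above can be sketched as (run as root):

```sh
echo 'iscsi_target=YES' >> /etc/rc.conf   # enable the daemon at boot
/etc/rc.d/iscsi_target start              # start it now
```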
Install iSCSI initiator
- fetch the DMG, mount, install (reboot required)
Configure the initiator
- open 'System Preferences' and select globalSAN. It will throw a warning about restart. Don't worry, it just flicks you to the correct system Preference pane.
- select the '[Targets]' tab and use the [+] button to add a new target.
- provide the name or IP address of your NetBSD iSCSI target. If you didn't alter the port it serves from, don't alter the port here.
- select 'Header Digest' by default
- enable the mount. This will cause OSX to detect a new (raw) device, and ask you what to do with it. You should select [Initialize...] to enter the OSX 'disk utilities' pane. OSX is then initializing this SCSI LUN over the wire, using the iSCSI protocol to read and write the disk blocks. If you have a large back-end iSCSI target file, this operation will take quite a long time: it's really just like newfs on a real raw disk, except it's done a) over a network protocol and b) into a single file on the host/target filesystem.
- in Disk Utilities you can use any Volume Scheme you like. I selected one partition, of HFS+ with journalling. This is what a locally attached USB or firewire external disc is typically configured as, but 'its just a disk' -you can sub partition it any way you like.
If you look at this disk image on the target, it's interesting:
$ ls -ltr /tmp/iscsi-target0
-rw-r--r-- 1 root wheel 104857600 Nov 18 22:45 /tmp/iscsi-target0
It reports as a large file (the 100MB from the /etc/iscsi/targets file), but it's not: df(1) on the filesystem will show that it's not all occupied.
$ file /tmp/iscsi-target0
/tmp/iscsi-target0: x86 boot sector; partition 1: ID=0xee, starthead 254, startsector 1, 204799 sectors, extended partition table (last)\011, code offset 0x0
$
It's detected as a file which contains a boot sector and disk structure. In principle you could probably use vnconfig to mount and examine this file. Not advised if it's being served by an iscsi-target daemon...
That's it! You can enable automatic mounting, or not (as you please), in the globalSAN preferences pane.
Once this is completed, the disk is a normal SCSI device, in HFS+ mode, and can be configured for any operation including Time Machine.
Issues
- It can be slow. Bear in mind that the speed you get for iSCSI is strongly limited by the client's ethernet speed and any switching fabric you have. Real SANs use dedicated high speed switches and protocols like InfiniBand to maximise disk throughput. If you do iSCSI on a home network, it's not going to look like your corporate SAN at work.
- Wireless is a poor choice of network to initialize your time-machine backup on: in my case, the first dump is 160GB and this would not complete in a sensible time.
- Even on a wired network, a home router is going to throttle you. I had 100Mbit on my server and 100Mbit on my router-switch combo, and achieved throughput on the order of 10GBytes/hr, which is about 2.8MBytes/sec. That's well below the apparent switch speed. (I will be experimenting with a back-to-back connection.)
- raidframe also imposes speed constraints. I was using a software RAID 1+0 over (notionally) 6 SATA disks, but two of the mirror sets were running in 'degraded' mode.
Sparse Files
A sparse file is a special file type which reports its 'size' in ls -l as the total it could be if it were fully populated. However, until actual file blocks at given offsets are written, they aren't there: it's a linked list. This permits a very fast growable file up to a limit, but at some risk: if you accidentally touch it the wrong way, the holes get filled in. You just have to be careful. FTP, for instance, doesn't honour sparse files: if you copy a file with FTP, the holes are filled in.
Sparse files were implemented in UNIX FFS a long time ago, and are in HFS+ as well. The behaviour of tools like pax, rsync, tar and ftp has to be verified on a case-by-case basis. It's not clear whether NetBSD pax(1) is safe or not. rsync has the -S flag to preserve sparse files.
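A quick, portable way to see sparse-file behaviour for yourself (a demonstration sketch, not NetBSD-specific): write a single byte past a large offset, then compare the apparent size with the allocated blocks.

```shell
# create a 1 MB file containing one byte at the end; the rest is a hole
dd if=/dev/zero of=/tmp/sparse_demo bs=1 count=1 seek=1048575 2>/dev/null

ls -l /tmp/sparse_demo   # reports the apparent size: 1048576 bytes
du -k /tmp/sparse_demo   # reports far fewer allocated kilobytes
```

On a filesystem with sparse-file support, du will show only a few kilobytes allocated even though ls reports a megabyte.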
Contents
Using the FUSE in NetBSD -current
Requirements
You will need a current kernel and userland, and up to date pkgsrc-current (pkgsrc/filesystems)
Introduction
The sources we refer to in this HowTo can exist anywhere on your system (normally in /usr). That's why we use src/foo/bar (e.g. src/lib/libpuffs) instead of a full path.
Setup
Make sure that your Kernel configuration file contains the following option under filesystem
file-system PUFFS
And this under pseudo-devices
pseudo-device putter
Add this to your /etc/mk.conf
MKPUFFS=yes
Build your kernel and update your userland.
Make sure you run "make includes" in src/sys whenever you update your NetBSD sources, so that you have proper header files, most importantly the src/lib/libpuffs headers.
# cd src/sys
# make USETOOLS=no includes
# cd src/lib/libpuffs
# make USETOOLS=no includes
Check if libpuffs is properly installed
# cd src/lib/libpuffs
# make USETOOLS=no cleandir dependall
# make USETOOLS=no install
Check if librefuse is properly installed.
# cd src/lib/librefuse
# make USETOOLS=no cleandir dependall
# make USETOOLS=no install
Check if fusermount is up to date and installed
# cd src/usr.sbin/fusermount
# make USETOOLS=no cleandir dependall
# make USETOOLS=no cleandir install
Check if puffs is up to date and installed
# cd src/usr.sbin/puffs
# make USETOOLS=no cleandir dependall
# make USETOOLS=no install
This will compile and install mount_9p, mount_portal, mount_psshfs and mount_sysctlfs
Install and Usage
Mount_psshfs
Mounting a remote filesystem over ssh
# mount_psshfs user@host:/path/to/directory /mountpoint
# umount /mountpoint
Stuff from /usr/pkgsrc/filesystems.
Fuse-ntfs-3g
Mounting a ntfs filesystem
# ntfs-3g /dev/device /mountpoint
# umount /mountpoint
Fuse-obexftp
Mounting an obexfs filesystem (Make sure your bluetooth connection is established with your device)
# obexfs -b 00:11:22:33:44:55 -B 10 /mnt/mountpoint
(where 00:11:22:33:44:55 is the address of your Bluetooth device)
Fuse-encfs
Fuse-cryptofs
Fuse-cddfs
Fuse-curlftpfs
Contents
THIS PAGE NEEDS AN UPDATE BECAUSE netbsd-10 vm.swap_encrypt=1, default on most platforms today, obsoletes swapping to cgd
Summary
It's getting more and more popular to use encrypted swap. With swap over NFS, however, this is not a trivial task. Swap over NFS is supported like this (in /etc/fstab):
server:/usr/swapfile none swap sw,-w=8192,nfsmntpt=/swap 0 0
But this cannot be encrypted. We will, however, cheat and use a vnd(4) on an NFS share.
This is how I did it on my Jornada 680 running 3.99.15.
Things needed
A kernel with both vnd(4) and cgd(4) support.
Creation
Making the swapspace
First we need to create the swapfile to be used. It's important that the swapfile is in a directory that is mounted when /etc/rc.d/swap2 runs. Either add the directory to $critical_filesystems_remote, or just put it in /usr.
Now run:
# dd if=/dev/zero of=/usr/swapfile bs=1m count=64
This will create a 64MB swapfile. Make sure it has the right permissions and owner.
# chown root:wheel /usr/swapfile
# chmod 600 /usr/swapfile
Configuring the swapspace the first time
Now we just have to configure it so the system can use it.
Configure the paramsfile for cgd(4).
# cgdconfig -g -o /etc/cgd/swapfile -V none -k randomkey blowfish-cbc
Now we can configure the device.
# vnconfig vnd0 /usr/swapfile
# cgdconfig cgd0 /dev/vnd0c /etc/cgd/swapfile
Replace /dev/vnd0c with /dev/vnd0d if necessary.
Disklabel the cgd with disklabel -I -e cgd0; it should look something like this:
# /dev/rcgd0c:
type: cgd
disk: cgd
label: default label
flags:
bytes/sector: 512
sectors/track: 2048
tracks/cylinder: 1
sectors/cylinder: 2048
cylinders: 64
total sectors: 131072
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # microseconds
track-to-track seek: 0 # microseconds
drivedata: 0
3 partitions:
# size offset fstype [fsize bsize cpg/sgs]
c: 131072 0 swap # (Cyl. 0 - 63)
Note: Depending on which architecture you use, you may need a different layout.
Like this on an i386:
a: 131072 0 swap # (Cyl. 0 - 63)
d: 131072 0 unused 0 0 # (Cyl. 0 - 63)
This depends on which partition your architecture uses as the raw partition. If unsure, check with:
# sysctl kern.rawpartition
kern.rawpartition=3
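The number reported by sysctl maps straight to a partition letter (0 = a, 1 = b, and so on). This small sketch does the translation, with the value 3 hard-coded as an example of what the sysctl above might report:

```shell
# Map the kern.rawpartition number to its partition letter (0=a, 1=b, ...).
raw=3   # on a live system: raw=$(sysctl -n kern.rawpartition)
letter=$(echo abcdefghijklmnop | cut -c $((raw + 1)))
echo "raw partition letter: $letter"
```

So kern.rawpartition=3 means the raw partition is d (use /dev/vnd0d), while 2 means c.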
Back it up so it can be used later.
# disklabel cgd0 > /etc/cgd/swapfile.disklabel
Use it (finally).
# swapctl -a /dev/cgd0c
Now you have working encrypted swap over nfs. To check its status:
# swapctl -l
Device 512-blocks Used Avail Capacity Priority
/dev/cgd0c 131072 9696 121376 7% 0
Use the swapspace at every reboot
Using this swapspace automatically at every reboot is a little tricky since it cannot be put in /etc/fstab, but it can be done in another way. And I have already done the work for you. Check that the variables make sense on your system, e.g. that you used vnd0 and cgd0 and that RAW_PART is right for your architecture. Create the file /etc/rc.conf.d/swap containing the following.
# Initialize cgd over vnd swap, suitable for nfs-swap.
#
# Note: We can NOT put this swapfile in /etc/fstab, this is why
# this is relatively complicated.
#
# If this is the only swapspace you have configured then you can set
# no_swap=YES in rc.conf, otherwise the system will complain every boot.
#
# IMPORTANT:
# $swapfile has to be in $critical_filesystems_remote. /usr is by default
#
vnd_device="vnd0"
cgd_device="cgd0"
swapfile="/usr/swapfile"
paramsfile="/etc/cgd/swapfile"
swap_disklabel="/etc/cgd/swapfile.disklabel"
RAW_PART="c" # <- change to suit your arch
SWAP_PART="c" # <- change to same as the disklabel
start_postcmd="cryptovnd_swap"
stop_cmd="cryptovnd_stop" # Note: We have to override stop_cmd
cryptovnd_swap()
{
	# Since there is only one swap-variable in rc.conf we have to
	# check that we are being called from swap2.
	if [ $name = "swap1" ]; then
		return
	fi
	if [ -f $swapfile ]; then
		echo "Configuring cgd over vnd swap."
		eval `stat -L -s $swapfile`
		if [ `echo $st_uid+$st_gid|bc` != 0 ]; then
			echo "$swapfile MUST be owned by root and group wheel"
			echo "$swapfile not used as swap."
			return 1
		else
			if [ ! -f $swap_disklabel ]; then
				echo "No $swap_disklabel."
				echo "$swapfile can not be used as swap."
				return 1
			fi
			if [ $st_mode != "0100600" ]; then
				echo "$swapfile MUST have permission 600"
				echo "$swapfile not used as swap."
				return 1
			fi
		fi
		vnconfig $vnd_device $swapfile
		cgdconfig $cgd_device /dev/${vnd_device}$RAW_PART $paramsfile
		disklabel -R -r $cgd_device $swap_disklabel
		swapctl -a /dev/${cgd_device}$SWAP_PART
	fi
}

cryptovnd_stop()
{
	if [ $name = "swap2" ]; then
		swapctl -d /dev/${cgd_device}$SWAP_PART
		cgdconfig -u $cgd_device
		vnconfig -u $vnd_device
		swapctl -U -t noblk
	else
		swap1_stop
	fi
}
Some issues and notes
- Do not include this cgd in /etc/cgd/cgd.conf
- It could happen that there isn't enough entropy in the kernel to initialize the swap partition. If so, you can add your NIC to the entropy pool in /etc/rc.conf with /sbin/rndctl -ced ne0 if you have a ne(4) NIC.
- If this is the only swapspace configured, set the variable no_swap=YES in /etc/rc.conf or the system will complain every boot.
Additional Information
- vnconfig(8) Manpage
- cgdconfig(8) Manpage
- swapctl(8) Manpage
- disklabel(8) Manpage
MTP mode
If you have a FUSE/ReFUSE-capable NetBSD version, you can use fuse-gphotofs to access your player. You can also try libmtp, which comes with command line tools, or simply gphoto.
UMS mode
You can try upgrading to the Korean firmware to enable umass mode for your player.
What it does:
It offers a system to generate binary updates for NetBSD (NOT patches), and to install, remove (with full "rollback to previous state" support), and manage them. Simple dependencies are allowed. It also allows auto-generating security advisories from update data.
Without further ado, here's how to use it.
Update producer: (people who create updates)
1. Write the .plist file to describe the update, and put it online.
Update the index file to include the update id (for example, NetBSD-UP2007-0001).
Examples for updates are available:
<http://www.NetBSD.org/~elad/haze/Updates/2006/NetBSD-UP2006-0001.plist>
<http://www.NetBSD.org/~elad/haze/Updates/2006/NetBSD-UP2006-0002.plist>
The index file looks like this:
<http://www.NetBSD.org/~elad/haze/Updates/2006/INDEX>
Note the directory hierarchy: the year matters; everything else can be tuned, but not the structure. For now.
2. Maintain a build directory for ports you want to generate updates
for. This is "obj-dir" in the config file (/etc/haze/haze.conf):
obj-dir /usr/netbsd/objdir/xphyre/destdir.%m
Haze knows to replace "%m" with the machine type. The default is
"/usr/obj/%s-%r/destdir.%m", where "%s" will be replaced by the
string "NetBSD", and "%r" will be replaced by the release, for
example, "4.0".
To keep this directory up-to-date, all the producer has to do is
run the build after the source is updated with the fix.
3. Maintain a list of "targets" you want to monitor. Haze calls a
combination of OS-release-machine a "target". For example,
"NetBSD-4.0-amd64" is a target. By default, Haze will only generate
updates for the host it runs on. You can override that, though:
generate-targets NetBSD-4.0-amd64 NetBSD-4.0-i386 NetBSD-3.0-amd64 NetBSD-3.0-i386
4. After the new files are built, generate updates. This is done using
the -G flag. For example, if you just rebuilt for
NetBSD-UP2007-0001, and want to generate updates for it:
haze -G -U NetBSD-UP2007-0001
The updates will show up in the output dir, /tmp by default, and
will be in the form of NetBSD-UP2007-0001-4.0-amd64.tar.gz.
5. Put the updates online, in the Updates/ directory. For example,
this would be a valid URL to an update package:
<http://www.NetBSD.org/~elad/haze/Updates/NetBSD-UP2006-0001-3.0-amd64.tgz>
Update consumer: (people who apply updates)
1. Make sure there's a /etc/haze directory and that it's writable by
the user Haze is running as. I won't elaborate on this too much,
but you *should* be able to tell Haze to perform updating on a
specified root directory, and then do the sync yourself, if you
don't trust running Haze as root. If you do:
mkdir /etc/haze
Everything else, including the configuration file and local
database, will be created by Haze automagically. You can inspect
the default values used in haze.h.
2. By default, things should pretty much Just Work. Therefore, here
are some usage examples:
Show the batch of updates waiting to be installed:
haze -B
Install all pending updates:
haze
Show locally known updates (including installed, ignored, and not applicable updates):
haze -L
Install a specific update:
haze -U NetBSD-UP2006-0001
Rollback an update:
haze -R -U NetBSD-UP2006-0001
View details about an update:
haze -V -U NetBSD-UP2006-0001
Explicitly ignore an update:
haze -i -U NetBSD-UP2006-0001
Operate in dummy mode, and just print stuff you'd do:
haze -x ...
Generate a security advisory skeleton for an update:
haze -S -U NetBSD-UP2006-0001
List available command line options:
haze -h
See also
Sometimes it is necessary to be quieter when using the shell, for example when working in groups, or if you are simply annoyed by the beep.
The console beep can be effectively disabled by disabling the pcppi driver in the kernel. This is done during kernel configuration by finding and commenting out the lines referring to the pcppi(4) driver and its attached child devices in the kernel configuration file, e.g. "sysbeep at pcppi" on i386. See the NetBSD Guide for information on compiling customised kernels.
In the event the user must use a precompiled kernel that has the sysbeep device enabled, the pcppi driver and attached devices can be interactively disabled at boot time, via userconf(1), as follows:
Select "Drop to boot prompt" at the boot menu.
Enter "boot -c" to begin interactive boot using userconf(1).
Disable the pcppi driver and its attached devices using the "disable" command. Type "?" or "help" at the "uc>" prompt for a list of the userconf(1) commands and their syntax. You may find the device numbers of "sysbeep at pcppi" and associated pcppi device drivers using the "list" command. Type "quit" when done disabling the pcppi devices.
These methods should work on any hardware.
Alternatively, there are several ways to disable the beep after the system has booted and the user has logged in, as described below.
On the console you can set the volume of the console beep with wsconsctl(8).
To display the current volume enter:
# wsconsctl bell.volume
You can control the volume by setting different values. To turn it off, enter:
# wsconsctl -w bell.volume=0
This may not work on all hardware. In that case, the solution is to set the bell pitch to 0 Hz:
# wsconsctl -w bell.pitch=0
The bell pitch defaults to 1500 Hz. Sometimes a lower value is more comfortable. Just keep trying.
To apply these settings at startup, add them to /etc/wscons.conf. For example:
setvar wskbd bell.volume 0
setvar wskbd bell.pitch 0
X11
If you are working with X11, you can turn off the console beep using the command xset(1).
If you are using modular Xorg you need to install xset; it is under x11/xset.
To turn the console beep off enter:
$ xset b off
To turn it back on, enter:
$ xset b on
You can also set the volume and the pitch of the beep. This is not always supported, depending on the hardware.
However, if you want to set volume and pitch, you can do something like this:
$ xset b 10 1000 100
See also
Screen
One can take screenshots of a console session with screen.
Run the program in screen and use ^A-h to make a "hardcopy".
Introduction
The best way to share a partition with Linux is to create an EXT2 filesystem and use revision 0 (aka GOOD_OLD_REV) as the filesystem format. By using the GOOD_OLD_REV (revision 0) format you can avoid the dir_index and large_file issues encountered with DYNAMIC_REV (revision 1).
Example
Here is an example of how you can create, format and check an EXT2 filesystem using the Linux native tools:
# mkfs -t ext2 -r 0 /dev/hda3
# fsck -C -t ext2 /dev/hda3
# tune2fs -c 20 -i 6m -L Share /dev/hda3
# fsck -C -t ext2 /dev/hda3
# tune2fs -l /dev/hda3
Using the revision 1 ext2 filesystem format is also possible, but you need to format it like this:
# mke2fs -I 128 -L "Choose a name here" -O ^dir_index,^has_journal -v /dev/partition
(you can get mke2fs from the e2fsprogs package).
Partitions formatted this way are compatible with the Windows and OS X ext2 drivers.
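On the NetBSD side such a partition is mounted with the ext2fs file system. Assuming the shared slice shows up as /dev/wd0e and should be mounted on /share (both names are examples; adjust for your disk layout), the /etc/fstab entry could look like this:

```
/dev/wd0e /share ext2fs rw 1 2
```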
Contents
Introduction
The Common Unix Printing System (CUPS) is a modular printing system for Unix-like computer operating systems that allows a computer to act as a powerful print server. A computer running CUPS is a host which can accept print jobs from client computers, process them, and send them to the appropriate printer. More information regarding the architecture of CUPS can be found here.
To check whether your printer works with CUPS and to what extent, you may visit the OpenPrinting database.
Install CUPS
Assuming your pkgsrc tree lies at /usr/pkgsrc, type:
# cd /usr/pkgsrc/print/cups
# make install
Modify rc.conf
Next copy cupsd startup script at /etc/rc.d:
# cp /usr/pkg/share/examples/rc.d/cupsd /etc/rc.d
Then add the following line to your /etc/rc.conf file:
cupsd=YES
And start the CUPS daemon, with:
# /etc/rc.d/cupsd start
Starting cupsd.
#
Install Foomatic PPD collection
The Foomatic PPD collection includes suitable PPDs for printers listed in the Foomatic printer/driver database. Together with the foomatic-filters package, this collection of PPDs allows many non-PostScript printers to function as if they were PostScript printers:
# cd /usr/pkgsrc/print/foomatic-ppds
# make install
The following package adds Foomatic PPDs to the CUPS PPD database:
# cd /usr/pkgsrc/print/foomatic-ppds-cups
# make install
Setup CUPS
Open a web browser and type in the address/location bar:
http://localhost:631
You will enter a web-based interface where you can manage your printing system (add printers, etc).
To print using CUPS from the command line without further modifications, you have to use /usr/pkg/bin/lpr. Simply running lpr will not print using CUPS but via the default NetBSD printing system. The same applies to the spool queue examination command lpq.
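One way around typing the full path, assuming the standard pkgsrc prefix of /usr/pkg, is to put the pkgsrc binaries in front of the base system in your PATH for the session:

```shell
# Let CUPS' lpr/lpq shadow the base system's versions for this shell.
export PATH=/usr/pkg/bin:$PATH
# On a machine with CUPS installed, `which lpr` now reports /usr/pkg/bin/lpr.
```

Add the export line to your shell's startup file to make it permanent.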
Network configuration
Server side
Client side
Documentation
Notes
Permissions
Just for the record (I'm not sure where this fits in): To make it print, I had to
chown lp /dev/ulpt0 # usb
and install print/hpijs for my Deskjet (it is used by gs via the foomatic thing), then set some additional options in the web interface, e.g. A4 paper size. Conclusion: the web interface sucks; looking into the logs is indispensable.
BSD lpr
I think I no longer need the BSD lpr, so I just did this to avoid confusion:
# chmod -x /usr/bin/lp*
# chmod -x /usr/sbin/lp*
HP Printer
To use HP printers (like DeskJet, OfficeJet, PhotoSmart, Business InkJet, and some LaserJet models) and to avoid errors like "/usr/pkg/libexec/cups/filter/foomatic-rip failed", you need to install another package named hpijs:
# cd /usr/pkgsrc/print/hpijs
# make install clean clean-depends
See also
Contents
Introduction
The NetBSD guide contains a chapter on printing. The basic parts of this printing system are the line printer spooler daemon known as lpd(8) and the associated tools such as lpr(1). Practically all printer-specific configuration is done with a special printcap(5) database. This guide will not replicate the information presented in the official guide, but instead tries to offer some examples on how this printing system can be configured with modern printers.
Alternatives
It appears that the Common Unix Printing System (CUPS) has gradually replaced other more traditional printing systems (see also How to setup CUPS in NetBSD). However, there is still room for lpd(8) and friends. If nothing else, CUPS is a relatively complex piece of software with a quite bad security record. Some also prefer to always use tools that come with the operating system instead of relying on external packages.
Configuring and using a printer with lpd(8) can be easier than with CUPS, once you wrap your head around it.
Example: HP DeskJet
The following steps were needed for a low-cost HP DeskJet printer. First, two packages were installed: print/hpijs and print/foomatic-filters:
cd /usr/pkgsrc/print/hpijs
make install package clean
cd ../foomatic-filters
make install package clean
The former is essential for high-quality output with HP printers and the latter is convenient. The print/hpijs package contains various .ppd files compressed with gzip(1). You need to pick one suitable for your printer and unpack it to a desired location. An example:
mkdir /etc/print
cd /etc/print
cp /usr/pkg/share/ppd/HP-DeskJet_5550-hpijs.ppd.gz .
gunzip *.gz
The next step is to configure the printer using the /etc/printcap file. An example that uses parallel port:
# This requires the following packages:
# print/hpijs and print/foomatic-filters.
#
lp|hp|HP Deskjet 5550:\
:lp=/dev/lpt0:\
:af=/etc/print/HP-DeskJet_5550-hpijs.ppd:\
:if=/usr/pkg/bin/foomatic-rip:\
:sd=/var/spool/output/lpd:\
:lf=/var/log/lpd-errs:\
:rg=print:\
:rs=true:\
:sh:
This will use the mentioned filter. The curiously named foomatic Perl script takes PostScript as standard input and generates the printer's page description language as standard output. This is pretty much all that is needed. For additional parameters in the configuration file, such as limiting the access to a group named print, please refer to the manual page, printcap(5).
The final step is to enable the lpd(8) daemon. The usual conventions with rc.conf(5) apply:
lpd=YES
lpd_flags="-s"
The -s flag tells the daemon to use UNIX domain socket instead of listening on all interfaces. This is probably a good idea if you are not configuring a printing server (in which case you probably already know what you are doing).
After starting the daemon you should be able to print files from the command line using lpr(1). Printing from GUIs should also work; at least print/xpdf and www/firefox3 were both capable of printing.
Example: HP LaserJet & network
Before starting, you have to enable the printer daemon in the rc.conf file; set:
lpd=YES
To enable printing on HP LaserJet network printers, add the following to /etc/printcap:
# HP-4250
HP-4250:\
:lp=:sh:sd=/var/spool/lpd/lp:\
:rm=192.168.170.193:\
:lf=/var/log/lpd-errs:mx#0:
Here HP-4250 is the name of your printer, which you can change as desired. Replace 192.168.170.193 with the IP address of your printer.
Check that the spool directory (/var/spool/lpd/lp above) exists and create it if needed.
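Creating the spool directory named by the :sd= capability could look like this (run as root; the root:daemon ownership mirrors NetBSD's stock /var/spool/output/lpd and is an assumption to verify on your system):

```
# mkdir -p /var/spool/lpd/lp
# chown root:daemon /var/spool/lpd/lp
# chmod 770 /var/spool/lpd/lp
```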
See also
Contents
What is CVS
CVS is an abbreviation for Concurrent Versions System.
Requirements
Nothing. CVS comes with the NetBSD base installation.
Setting up the repository
First, decide where you want to store all CVS repositories. Let's take /usr/cvsrepositories:
# mkdir /usr/cvsrepositories
Now you can create directories for different projects.
# cd /usr/cvsrepositories
# mkdir mycompany
# mkdir myprivatestuff
These projects must have mode 770 to separate them from each other.
# chmod 770 mycompany
# chmod 770 myprivatestuff
Creating user groups
You should create a group for projects, where people are working together.
# groupadd mycompanyname
Now assign this group to the project directory it belongs to.
# cd /usr/cvsrepositories
# chgrp mycompanyname mycompany/
cvs init
Before you can either check out or import anything, you have to initialise your project's root directory. To keep the path short for the CVSROOT environment variable, I recommend using symlinks to the repository from the root /.
# cd /
# ln -s /usr/cvsrepositories/mycompany mycompany
Now create the cvs repository using
# cvs -d /mycompany/ init
Creating users
Now create users that are allowed to check out from your repository. Keep company workers in the group you have created before.
# useradd -G mycompanyname -m john
And set a password for the user john
# passwd john
It's your decision if you want to grant the users shell access or not.
Setting environment variables
Please set the environment variables CVSROOT and CVS_RSH. Bash or ksh users please use export.
# export CVSROOT=username@yourserver.com:/mycompany
# export CVS_RSH=ssh
csh users set it like this:
# setenv CVSROOT username@yourserver.com:/mycompany
# setenv CVS_RSH ssh
As you can see we use ssh as the transport protocol. This is recommended. Keep your transfers encrypted.
Initial check in
You should now proceed with the initial check-in of your work. You can do this from another machine as well. Don't forget to set the environment variables there too.
Now please change into the directory you wish to import initially.
# cd myproject
and import it to your cvs server.
# cvs import -m "myproject initial import" myproject myproject initial
This should produce output like this:
N myproject/test.c
N myproject/test.h
N myproject/mi/mi.h
N myproject/md/md.h
No conflicts created by this import
checkout
To checkout the work, set your environment variables and enter
# cvs co -PA myproject
This will checkout myproject.
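After the initial checkout, the day-to-day cycle typically looks like this (the file name and commit message are placeholders; run from inside the checked-out directory):

```
# cvs update -dP
# cvs add newfile.c
# cvs commit -m "describe your change"
```

update -dP pulls in changes committed by others, creating new directories and pruning empty ones; add schedules a new file for addition; commit publishes your local changes to the repository.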
ssh config
Please configure your ssh client to fit your need in .ssh/config, like using a different ssh port than 22.
That's it. For more information about cvs(1) and using it please read the manpage.
See also
Besides setting up the global system timezone by symlinking /etc/localtime to a file in /usr/share/zoneinfo, you can also set a timezone that applies to only one user. This is done by setting the environment variable TZ.
You can set it in your startup file like this:
$ echo 'export TZ=Europe/Amsterdam' >> ~/.profile
From this shell, all subsequent date(1) calls will use the /usr/share/zoneinfo/Europe/Amsterdam file for translating the system's UTC time to your local time.
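You can see the effect without touching any configuration: the same instant is printed in two zones, and only the TZ variable differs between the two calls.

```shell
# Same moment, two renderings; TZ is evaluated per process.
TZ=UTC date '+%H:%M %Z'
TZ=Europe/Amsterdam date '+%H:%M %Z'
```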
To run a single process with a specific timezone, try something like this:
$ env TZ=Canada/Eastern xclock -d -strftime "Toronto: %a, %d %b, %H:%M" &
This will start an environment with the TZ variable set to Canada/Eastern, and run a digital (-d) xclock with the time formatted as instructed by -strftime, including putting a note about which timezone it belongs to ("Toronto"). This process will detach from the terminal (because of the &), but leave the environment you ran it from with the same timezone it began with. With a setup like this, one can run an xclock (or many xclocks) displaying the localtime of various timezones around the world.
References
Contents
Introduction
This little article will try to make sense of the jungle that is NFS and NIS. For our example we will use NFS for keeping /home on a server, allowing us to work on the same files in our homedir from any computer in the network.
NIS
NIS (Network Information Service) is a directory system which is used to centralise configuration files like /etc/hosts and /etc/passwd. By using NIS for passwd, you can have the same users on each host in the network without the hassle of keeping the passwd file of all hosts synchronised.
We will need NIS (or another directory service) to make sure the NFS user ids/group ids are the same on the server as on all clients. Otherwise, bad things will happen, as you can probably imagine (especially in our example of mounting /home over NFS). Note that using NIS with NFS is not mandatory; you can also keep the server's and clients' passwd files in sync manually.
NIS used to be called the "Yellow Pages", or YP for short. Because of trademarks it had to be renamed, but the programs are all still prefixed with yp.
Kernel options
Before doing anything with NFS, ensure that your kernel has support for it: both clients and servers must have NFS kernel support enabled. This is the case for GENERIC kernels. For custom kernels, the following line must be in the kernel configuration file:
file-system NFS # Network File System client
Your server also must have the following option:
options NFSSERVER # Network File System server
If you want to get funky and boot from NFS (not discussed in this article), your clients need these options as well:
options NFS_BOOT_DHCP,NFS_BOOT_BOOTPARAM
Creating a NIS setup
The first thing we should do is decide on a NIS domain name. This has nothing to do with your machine's Internet domain name. It is just a unique name that is used to identify machines in the same NIS block.
The domain name is set (as root) using the domainname(1) program, or can be set in the /etc/defaultdomain file.
Alternatively, in most BSD systems, it can be set in /etc/rc.conf under the variable domainname.
root@earth# domainname planets
After this, we must initialise all files needed for the server to do its work. For this, we use the ypinit utility.
root@earth# ypinit -m
The -m means we are creating a master server. On more complex networks, you may even want slave servers. The tool will ask you for a list of YP servers to bind to.
Since we're only using one server, just press RETURN (make sure your own server's internal address is in the list).
Before we run make in /var/yp, as the tool says, we must enable the NIS daemons: rpcbind, ypserv and ypbind (in that order). After that, we can run make in /var/yp.
To test if your setup is working, try yptest. It should spew out the passwd file among others, so don't panic.
To get things working, you need to enable the yppasswdd, rpcbind and ypbind daemons as well. In order to do that, edit the /etc/rc.conf file and add the following:
#NIS server
ypserv="YES"
ypbind="YES"
yppasswdd="YES"
rpcbind="YES"
Then just run
# /etc/rc.d/rpcbind start
# /etc/rc.d/ypserv start
# /etc/rc.d/ypbind start
# /etc/rc.d/yppasswdd start
rpc.yppasswdd(8) must be running on the NIS master server to allow users to change information in the password file.
ypserv(8) provides information from NIS maps to the NIS clients on the network.
ypbind(8) finds the server for a particular NIS domain and stores information about it in a "binding file".
After that, you can use ypinit:
root@mars# ypinit -c
Then, add your NIS server's address to the list. To test if everything is working, use yptest on the client as well. Note that ypbind will HANG if it can't find the server!
If everything is working, you are ready to go! Just edit /etc/nsswitch.conf and put in some nis
keywords. For example:
passwd: files nis
would first look up usernames/passwords/uids in /etc/passwd, and if it can't find it, it would look it up using NIS. Right after changing this file, you should be able to log in on your system using a username which is only in /etc/passwd on the server. That's all there is to it.
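Once nsswitch.conf is set up, you can verify the whole lookup path from a client; "john" here stands in for any account that exists only in the NIS passwd map:

```
# ypmatch john passwd
# id john
```

ypmatch queries the NIS map directly, while id exercises the nsswitch-based lookup that login itself uses.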
The daemons
What are all those daemons for? Well, here's a quick rundown:
Portmap/rpcbind is the program which maps RPC (Remote Procedure Call) program numbers to port numbers (hence, portmapper). Any program which wishes to know on what port a certain RPC program is listening can ask this from the portmapper daemon (rpcbind). Each RPC service has its own number, which can be looked up in /etc/rpc. These numbers are how rpcbind can match the running RPC services to the ports. In short: If rpcbind is not running, not a single RPC program will work.
Ypserv is the daemon that serves the NIS maps to clients, as noted above. Ypbind is the daemon which can find the YP server for the specified domain.
NFS
Setting up NFS is a piece of cake. Just enter all directories you wish to export in /etc/exports and start the NFS daemon. In our example we would have:
/home -network 192.168.0.0 -mask 255.255.0.0 -maproot=root
This exports /home only on the LAN 192.168.x.x. The maproot line is needed, because otherwise the client's root will not have superuser access. Now, start the mount daemon and the NFS daemons (mountd and nfsd) as root on your server, in that order. For that type:
root@earth# /etc/rc.d/rpcbind onestart
root@earth# /etc/rc.d/mountd onestart
root@earth# /etc/rc.d/nfsd onestart
root@earth# /etc/rc.d/nfslocking onestart
If you wish to start the NFS server on boot, add the following lines to your /etc/rc.conf:
nfs_server=yes
rpcbind=yes
mountd=${nfs_server}
lockd=${nfs_server}
statd=${nfs_server}
Now, mount from the client by typing:
root@mars# mount -t nfs earth:/home /home
Voila, you're done. Just add all NFS volumes you want to mount to your /etc/fstab like this
earth:/home /home nfs rw
and have them mounted at system startup.
NOTE: I had much trouble with NFS which was caused by UDP packet fragmentation. This made all writes extremely slow (and other outgoing network traffic as well!) while reads were at an acceptable speed. To solve this, I added the (undocumented?) tcp option to fstab to mount NFS over TCP. You'll probably also need to add
nfsd_flags='-t'
to rc.conf so the NFS server serves up TCP exports.
If you just want to run NFS, you need to run the following daemons on your server: rpcbind, mountd, nfsd (in that order)
Notes
Concerning NFS
If you find NFS is not suitable for you, you could try Coda. The Coda filesystem tries to overcome some of the drawbacks of NFS:
- Handling of (sudden) disconnections
- Its own authentication system
And some others. The latest NFS versions are of course trying to integrate some of Coda's features as well.
Concerning NIS
A disadvantage of NIS is that it is not very secure. If security is a big concern, have a look at LDAP and NIS+, which are more complex directory services. For networks where security isn't that important (like most home networks), NIS will do. It is also much easier to set up than NIS+ or LDAP.
On NetBSD (probably on other systems as well), the NIS server consults /etc/hosts.allow and /etc/hosts.deny (from Wietse Venema's tcpwrappers package) to determine if the requesting host is allowed to access the NIS directory. This can help you in securing NIS a little.
My /etc/hosts.deny looks like this:
ypserv: ALL
rpcbind: ALL
ypbind: ALL
nfsd: ALL
In my /etc/hosts.allow I have my LAN hosts.
References
- O'Reilly's Managing NFS and NIS, 2nd Edition
- Linux NFS howto
- The CODA project
- The good old NetBSD Guide
- Replacing NIS with Kerberos and LDAP howto
See also
- mount_nfs(8) manpage
Introduction
SWAT - the Samba Web Administration Tool provides a really quick and easy way to set up a Samba server, with more powerful configuration options available to those who need them. It's already part of the Samba package that is available from pkgsrc, and you don't even need to install and configure an HTTP server like Apache to use it.
In this tutorial I will go through the steps I took to set up a share on my NetBSD machine, so that the machine's hard drive could be used for storage by the Windows PCs on the network. The 'guest' account is used for access and I won't go into how to set up users and passwords, so this solution would probably be more suitable for home networks.
Install Samba and enable SWAT
The first step is to fetch, build and install Samba:
# cd /usr/pkgsrc/net/samba
# make install clean
Next, put scripts in /etc/rc.d so that smbd and nmbd will be started automatically when NetBSD boots up. I simply used the example scripts that came with NetBSD.
# cp /usr/pkg/share/examples/rc.d/smbd /etc/rc.d/
# cp /usr/pkg/share/examples/rc.d/nmbd /etc/rc.d/
You also need to add the following lines to /etc/rc.conf
smbd=YES
nmbd=YES
SWAT can be enabled by adding the following line to /etc/inetd.conf
swat stream tcp nowait.400 root /usr/pkg/sbin/swat swat
Now, restart inetd to enable SWAT
# /etc/rc.d/inetd restart
Use SWAT to configure the Samba server
You should now be able to access SWAT by surfing to http://<hostname>:901/ where <hostname> is the IP address of your NetBSD machine (or localhost if you are accessing SWAT locally). Login as 'root' with your system's root password. You will be taken to SWAT's main menu.
First, click on the 'Globals' icon, and use that menu to configure global options such as the 'workgroup' your Samba server is to be part of. If you don't understand what an option does, click the 'help' link next to it. Use the 'commit changes' button to save your work.
Next, click on the 'Shares' icon. To create a share, type a name into the text box and click 'create share'. You will now be able to specify the path to the folder that you want to share. To make the share accessible to anyone on the network without a password, change 'guest ok' to 'yes' using the drop-down menu. Change 'read only' to 'no' if you want other machines on the network to have read/write access to your shared folder.
Finally click on the 'Status' icon. From here you can start/stop/restart the Samba server without having to issue any commands at the command line. Just click 'start smbd' and 'start nmbd'.
Your shared folder should now be accessible by the other machines on the network. To check, use 'Network Neighbourhood' in Windows; KDE users can browse to smb:/ in Konqueror.
Setting up a samba server on your NetBSD box for WindowsXP clients is really simple.
Install samba via pkgsrc:
# cd /usr/pkgsrc/net/samba
# make install clean
To start the services via inetd instead, edit /etc/inetd.conf and uncomment the following two lines.
#netbios-ssn stream tcp nowait root /usr/pkg/sbin/smbd
#netbios-ns dgram udp wait root /usr/pkg/sbin/nmbd
Change it to this:
netbios-ssn stream tcp nowait root /usr/pkg/sbin/smbd
netbios-ns dgram udp wait root /usr/pkg/sbin/nmbd
Save the changes and restart inetd:
/etc/rc.d/inetd restart
Now add the following lines to /etc/rc.conf:
smbd=YES
nmbd=YES
samba=YES
You have to create a /usr/pkg/etc/samba/smb.conf with the following basic configuration:
[global]
workgroup = some_group
server string = NetBSD Samba Server
hosts allow = 192.168.1. 192.168.0.
encrypt passwords = yes
[shared]
comment = Shared
path = /home/ficovh/mp3
browseable = yes
writable = no
valid users = samba
Add a valid user to the NetBSD system:
# useradd samba
Add a windows user to samba and set the password:
# smbpasswd -a samba
Now test the server with your Windows machine.
You can also browse the content from a windows machine with NetBSD smbclient:
# smbclient //ip_windows/shared_name_resource
ip_windows is the IP for the windows machine and shared_name_resource is the directory shared.
You can also test if your local samba server is working.
# smbclient -Usamba -L localhost
That's it: a basic samba server on your NetBSD box.
Meta
Note that there is also a Xen HOWTO. Arguably this content could be folded in there.
Requirements
Xen3 is supported from NetBSD-4.0 onward. If you plan on using NetBSD-current, please read the article How to build NetBSD-current first. Guest operating systems can run from their own partitions, or from image files in the main (DOM0) install.
This tutorial describes how to:
- Install and configure NetBSD as a DOM0
- Install and run a NetBSD as a DOMU
- Install and run a Windows XP system as a DOMU
- Install and run a Debian system as a DOMU
Installing Xen tools and kernels
Xen tools
To run and administer xen domains, we need the xentools3 or xentools33 packages which are available in pkgsrc.
Xen 3.1 packages are under sysutils/xentools3 for the traditional xentools, and sysutils/xentools3-hvm for the additional HVM support needed to run unmodified OSes such as Windows XP.
Xen 3.3 packages are under sysutils/xentools33. Unlike Xen 3.1, no extra package is required for HVM support. Note, it is not possible to install Xen 3.1 and Xen 3.3 packages at the same time. They conflict with each other.
HVM stands for Hardware Virtual Machine. The benefit of hardware virtualization is that you can run OSes that don't know they are being virtualized, like Windows XP, for example. However, you must have a CPU which supports it: Intel CPUs must have the 'VT' feature, AMD CPUs the 'SVM' feature. You can find out if your CPU supports HVM by taking a look at this page:
http://wiki.xensource.com/xenwiki/HVM_Compatible_Processors
In NetBSD 5.0 there's a new cpuctl command. Here is example output for an AMD CPU:
# cpuctl identify 0
cpu0: AMD Unknown K8 (Athlon) (686-class), 2210.22 MHz, id 0x60f82
cpu0: features 0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR>
cpu0: features 0x178bfbff<PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX>
cpu0: features 0x178bfbff<FXSR,SSE,SSE2,HTT>
cpu0: features2 0x2001<SSE3,CX16>
cpu0: features3 0xebd3fbff<SCALL/RET,NOX,MXX,FFXSR,RDTSCP,LONG,3DNOW2,3DNOW>
cpu0: features4 0x11f<LAHF,CMPLEGACY,SVM,EAPIC,ALTMOVCR0,3DNOWPREFETCH>
cpu0: "AMD Turion(tm) 64 X2 Mobile Technology TL-64"
cpu0: I-cache 64KB 64B/line 2-way, D-cache 64KB 64B/line 2-way
cpu0: L2 cache 1MB 64B/line 16-way
cpu0: ITLB 32 4KB entries fully associative, 8 4MB entries fully associative
cpu0: DTLB 32 4KB entries fully associative, 8 4MB entries fully associative
cpu0: Initial APIC ID 0
cpu0: AMD Power Management features: 0x7f<TS,FID,VID,TTP,HTC,STC,100>
cpu0: family 0f model 08 extfamily 00 extmodel 06
Note the SVM feature flag in the features4 line, indicating that HVM support is present on this CPU. The feature may nevertheless be disabled in the BIOS; in that case, NetBSD 5.0 and later print the following dmesg line (on AMD CPUs only):
cpu0: SVM disabled by the BIOS
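This check is easy to script. The sketch below greps for the AMD SVM or Intel VMX flag; it is fed the sample features4 line from above so it runs anywhere, but on a live system you would pipe the output of cpuctl identify 0 into it instead:

```shell
# Succeeds with "HVM capable" if the CPU flag list shows hardware
# virtualization support (SVM on AMD, VMX on Intel)
hvm_check() {
    grep -Eq 'SVM|VMX' && echo "HVM capable" || echo "no HVM support"
}

# Sample line from the cpuctl output above; on a real system use:
#   cpuctl identify 0 | hvm_check
echo "cpu0: features4 0x11f<LAHF,CMPLEGACY,SVM,EAPIC,ALTMOVCR0,3DNOWPREFETCH>" | hvm_check
```

Remember that a positive result only means the CPU supports HVM; as noted above, it may still be disabled in the BIOS.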
Xen 3.1 (the xentools3-hvm package will automatically bring in the base xentools3):
# cd /usr/pkgsrc/sysutils/xentools3-hvm
# make install
Xen 3.3:
# cd /usr/pkgsrc/sysutils/xentools33
# make install
Xen kernel
Next, we will get the xen hypervisor kernel itself. For NetBSD 4.x and 5.x the i386 port does not support 'PAE' kernels and must run the Xen 3.1 package. This restriction has been removed in -current and is not relevant for the amd64 port.
For Xen 3.1, in pkgsrc this is sysutils/xenkernel3, for Xen 3.3, this is sysutils/xenkernel33:
# cd /usr/pkgsrc/sysutils/xenkernel3
# make install
Then copy the resulting kernel into the root directory, like this:
# cp /usr/pkg/xen3-kernel/xen.gz /
Xen DOM0 kernel
Lastly, we need a XEN-enabled kernel for our DOM0 domain. There are two possibilities: downloading the binary version, or building it from source.
Downloading the binary version
From NetBSD-4.0 onward, NetBSD supports Xen and provides XEN-enabled kernels directly on the NetBSD FTP site, in the binary/kernel directory associated with each release.
For example, with NetBSD-4.0 you can grab one (netbsd-XEN3_DOM0.gz) like this:
# ftp -a ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-4.0/i386/binary/kernel/netbsd-XEN3_DOM0.gz
The netbsd-XEN3_DOM0.gz file contains a gzipped version of the kernel. Just copy or move it into the root directory like this:
# cp netbsd-XEN3_DOM0.gz /
Building it from source
Building a kernel from source is out of the scope of this section. Please consult How to build a kernel from NetBSD's FAQ for more details.
Once the build is done, you can find the kernels in /usr/src/obj/_releasedir_/i386/binary/kernel/. As with the binary version, copy or move netbsd-XEN3_DOM0 into the root directory /.
Selecting a bootloader
In NetBSD 5.0 the native boot loader, /boot, can load Xen directly. The NetBSD 5.0 bootloader can easily be dropped into a NetBSD 4.x system by copying it to /boot and running installboot(8) appropriately.
Updating /boot
For full details refer to installboot(8), but for a standard configuration with VGA console and an IDE or SATA drive with an FFSv1 root file system use the following:
# cp /usr/mdec/boot /boot
# installboot -v -o timeout=5 /dev/rwd0a /usr/mdec/bootxx_ffsv1
Updating /boot.cfg
NetBSD 5.0 or later will already have a /boot.cfg file with a basic configuration. Enabling Xen support only requires one additional line in this case. If you're upgrading from an earlier version or do not have an existing /boot.cfg use the following example:
banner=Welcome to NetBSD
banner==================
banner=
banner=Please choose an option from the following menu:
menu=Boot normally:boot netbsd
menu=Boot single-user:boot netbsd -s
menu=Boot backup kernel:boot onetbsd
menu=Drop to boot prompt:prompt
menu=Boot Xen with 256MB for dom0:load /netbsd-XEN3_DOM0 console=pc;multiboot /usr/pkg/xen3-kernel/xen.gz dom0_mem=256M
menu=Boot Xen with 256MB for dom0 (serial):load /netbsd-XEN3_DOM0 console=com0;multiboot /usr/pkg/xen3-kernel/xen.gz dom0_mem=256M console=com1 com1=115200,8n1
menu=Boot Xen with dom0 in single-user mode:load /netbsd-XEN3_DOM0 -s;multiboot /usr/pkg/xen3-kernel/xen.gz dom0_mem=256M
timeout=5
default=1
Make sure you update the "load /netbsd-XEN3_DOM0" and the "dom0_mem=256M" arguments to match your setup. On next boot select the 'Boot Xen with 256MB for dom0' option and make sure you see Xen kernel messages and the normal NetBSD kernel messages. Once you're satisfied it is working you can change the "default=1" line to "default=5" to automatically boot Xen on reboot.
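Switching the default entry can be done with sed(1). This sketch operates on a scratch copy in /tmp containing just the relevant line; once you have verified that the Xen entry boots, run the same substitution against the real /boot.cfg:

```shell
# Scratch stand-in for /boot.cfg with the line we want to change
printf 'timeout=5\ndefault=1\n' > /tmp/boot.cfg

# Make entry 5 (Boot Xen with 256MB for dom0) the default
sed -i.bak 's/^default=1$/default=5/' /tmp/boot.cfg

grep '^default=' /tmp/boot.cfg
```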
Setting up DOM0
Creating xen devices
To create all xen devices, change to /dev and run ./MAKEDEV xen
cd /dev
./MAKEDEV xen
This should create the devices xencons, xenevt and xsd_kva. If any of these are missing you may not have updated to CURRENT using the latest sources and you will have to create the missing device files.
Configuring the bridge interface
The bridge(4) interface is used to provide network access to DOMUs.
To use one, edit (or create) the file /etc/ifconfig.bridge0 and insert the following lines:
create
!brconfig $int add bge0 stp bge0 up
Where 'bge0' should be changed to the name of the interface you want to use with your guest operating systems. Use ifconfig(8) to get more details about your actual interfaces.
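As a convenience, the file can be generated from a one-variable script. This sketch writes to /tmp so it is safe to try; on the real system the target is /etc/ifconfig.bridge0 (note that $int is meant literally in that file, the rc scripts substitute it, hence the backslash):

```shell
IF=bge0    # replace with your actual interface name

# Writing to /tmp for illustration; the real file is /etc/ifconfig.bridge0
cat > /tmp/ifconfig.bridge0 <<EOF
create
!brconfig \$int add $IF stp $IF up
EOF

cat /tmp/ifconfig.bridge0
```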
Rebooting into DOM0
Time to reboot:
# shutdown -r now
If all has gone well, you should have booted into the XEN3_DOM0 kernel. Check this with uname(1):
# uname -v
NetBSD 4.0 (XEN3_DOM0) #0: Sun Dec 16 01:20:31 PST 2007
builds@wb34:/home/builds/ab/netbsd-4-0-RELEASE/i386/200712160005Z-obj/home/builds/ab/netbsd-4-0-RELEASE/src/sys/arch/i386/compile/XEN3_DOM0
You should see XEN3_DOM0 mentioned in the output.
Configuring rc scripts
Copy or symlink xend, xenbackendd and xendomains from /usr/pkg/share/examples/rc.d to /etc/rc.d.
# cp /usr/pkg/share/examples/rc.d/xend /etc/rc.d/
# cp /usr/pkg/share/examples/rc.d/xenbackendd /etc/rc.d/
# cp /usr/pkg/share/examples/rc.d/xendomains /etc/rc.d/
Edit /etc/rc.conf and add the following lines:
xend=YES
xenbackendd=YES
xendomains="dom1"
Later on, when you have created a configuration file for 'dom1', the xendomains variable above will cause 'dom1' to be started when the system boots. Since no configuration exists for dom1 yet, it does nothing for now. If you choose to name your configuration file something else, adapt the name accordingly.
To avoid rebooting a second time, start all three services now:
# /etc/rc.d/xend start
# /etc/rc.d/xenbackendd start
# /etc/rc.d/xendomains start
Run ifconfig -a to ensure the bridge interface is present, and ps ax | grep xen to check that you get output similar to this:
12 ? DK 0:00.00 [xenwatch]
13 ? DK 0:00.00 [xenbus]
411 ? I 0:00.24 xenstored --pid-file=/var/run/xenstore.pid
594 ? IWa 0:00.26 xenconsoled
629 ? IW 0:00.00 /usr/pkg/bin/python2.3 /usr/pkg/sbin/xend start
631 ? IWsa 0:00.02 /usr/pkg/sbin/xenbackendd
639 ? IWa 0:00.52 /usr/pkg/bin/python2.3 /usr/pkg/sbin/xend start
The DOM0 configuration is now done. We will proceed to configuring DOMU domains.
Configuring DOMU
Configuring and installing a NetBSD DOMU
Create (or modify) /usr/pkg/etc/xen/dom1 and include the following:
kernel = "/usr/src/obj/releasedir/i386/binary/kernel/netbsd-INSTALL_XEN3_DOMU.gz"
#kernel = "/netbsd-XEN3_DOMU"
memory = 64
name = "dom1"
#vcpus = 1
disk = [ 'phy:/dev/wd0g,0x03,w','file:/usr/src/obj/releasedir/i386/installation/cdrom/netbsd-i386.iso,0x04,r' ]
vif = [ 'bridge=bridge0' ]
root = "/dev/wd0d"
This configuration boots into the NetBSD sysinst program and allows you to install a NetBSD DOMU using the normal sysinst method. This configuration uses a DOMU_INSTALL kernel and an ISO image provided by a successful 'build release' and 'build iso-image'. You may be able to locate a valid Xen3 DOMU_INSTALL kernel from ftp://ftp.netbsd.org/pub/NetBSD-daily/ but if not, building a release is your best bet.
In this configuration file, /dev/wd0g is the partition reserved for the guest operating system. Change this to the partition you reserved prior to following the instructions in this document.
If you would like to use a physical CDROM instead of an ISO image, change the disk line to:
disk = [ 'phy:/dev/wd0g,0x03,w','phy:/dev/cd0a,0x04,r' ]
Now boot into sysinst using the command:
xm create dom1 -c
The reserved partition will appear as /dev/xbd0
. Proceed as you would with a
normal NetBSD installation using xbd0 as the target drive and xbd1 as the
CDROM.
When you have finished, run shutdown -hp now inside dom1.
Now edit /usr/pkg/etc/xen/dom1: comment out the INSTALL kernel and uncomment the DOMU kernel.
You should now have a working NetBSD DOMU (dom1). Boot into dom1 again with the command:
xm create dom1 -c
and ensure the file /etc/ttys contains only this line, or has all other lines commented:
console "/usr/libexec/getty Pc" vt100 on secure
Also ensure the file /etc/wscons.conf is completely empty or has all lines commented out. These last two steps ensure no errors appear on boot. Setting wscons=NO in /etc/rc.conf may effectively do the same thing.
From here, configure /etc/rc.conf and all runtime configuration files as you would normally. The network interface name is xennet0; use this name when configuring an IP address.
More information can be obtained from the Xen user guide and the official NetBSD Xen Howto. Questions can be addressed to the port-xen@NetBSD.org mailing list.
Configuring and installing a Windows XP DOMU
This requires an HVM capable processor and xentools (see sections above).
This assumes you have a copy of the Windows install CD in /home/xen/winxp.iso, and wish to create a file /home/xen/winxp.img to hold the install. First create a blank file to hold the install. This assumes a size of 4GB (4096M). If you want a different size adjust the numbers to match:
# dd if=/dev/zero of=/home/xen/winxp.img bs=1m count=4096
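Since bs is one megabyte, count is simply the desired size in megabytes. The sketch below uses a small 16 MB file in /tmp so it runs quickly, and spells the block size out in bytes because bs=1m is the BSD notation while GNU dd wants bs=1M:

```shell
SIZE_MB=16    # use 4096 to reproduce the 4 GB Windows image above

# 1048576 bytes = 1 MB, accepted by both BSD and GNU dd
dd if=/dev/zero of=/tmp/disk.img bs=1048576 count=$SIZE_MB 2>/dev/null

# The file should be exactly SIZE_MB megabytes
wc -c < /tmp/disk.img
```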
Create /usr/pkg/etc/xen/win01
:
kernel = '/usr/pkg/lib/xen/boot/hvmloader'
builder = 'hvm'
memory = '400'
device_model='/usr/pkg/libexec/qemu-dm'
disk = [ 'file:/home/xen/winxp.img,ioemu:hda,w',
'file:/home/xen/winxp.iso,ioemu:hdb:cdrom,r', ]
# Hostname
name = "win01"
vif = [ 'type=ioemu, bridge=bridge0' ]
boot= 'd'
vnc = 1
usbdevice = 'tablet' # Helps with mouse pointer positioning
You may want to modify the amount of memory and pathnames.
Ensure you have a vncviewer installed, such as net/tightvncviewer or net/vncviewer from pkgsrc.
Then start the XENU and connect to it via VNC.
# xm create /usr/pkg/etc/xen/win01
# vncviewer :0
This will boot the Windows ISO image and let you install Windows as normal. As Windows reboots during install you may need to restart vncviewer.
After the install, change boot = 'd' to boot = 'c' to have the system boot directly from the disk image.
Configuring and installing a GNU/Linux DOMU
We will do this in two steps:
- install a GNU/Linux system, from a livecd or any installation media
- configure the DOM0 so that it can create and start the Linux DOMU.
Installing a Linux distribution (soon-to-be DOMU)
Before proceeding with DOMU configuration, we will install our favorite GNU/Linux distribution on the computer.
In order to do it, we need at least two partitions (only one, if you do not consider the swap). These partitions must reside outside of the NetBSD slice, and may be either of primary or extended type. Of course, you can use more than two, but adapt your labels and partitions accordingly.
We do not cover the partition/slices manipulations through fdisk(8) and disklabel(8), as it depends strongly on how you manage your hard drive's space.
For this tutorial, we will use this partitioning:
# fdisk /dev/wd0d
fdisk: removing corrupt bootsel information
fdisk: Cannot determine the number of heads
Disk: /dev/wd0d
NetBSD disklabel disk geometry:
cylinders: 486344, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
total sectors: 490234752
BIOS disk geometry:
cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
total sectors: 490234752
Partition table:
0: Linux native (sysid 131)
start 63, size 20482812 (10001 MB, Cyls 0-1274)
PBR is not bootable: All bytes are identical (0x00)
1: Linux swap or Prime or Solaris (sysid 130)
start 20482875, size 1959930 (957 MB, Cyls 1275-1396)
PBR is not bootable: All bytes are identical (0x00)
2: NetBSD (sysid 169)
start 61464690, size 428770062 (209360 MB, Cyls 3826-30515/178/63), Active
3: <UNUSED>
Drive serial number: -286527765 (0xeeebeeeb)
Here you can see that we use two primary partitions for our future Linux DOMU:
- partition 0 (for the root directory /)
- partition 1 (for the swap)
Labels:
16 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 30720816 61464690 4.2BSD 2048 16384 0 # (Cyl. 60976*- 91453*)
b: 1049328 92185506 swap # (Cyl. 91453*- 92494*)
c: 428770062 61464690 unused 0 0 # (Cyl. 60976*- 486343)
d: 490234752 0 unused 0 0 # (Cyl. 0 - 486343)
e: 20480000 93234834 4.2BSD 0 0 0 # (Cyl. 92494*- 112812*)
f: 20480000 113714834 4.2BSD 0 0 0 # (Cyl. 112812*- 133129*)
g: 20480000 134194834 4.2BSD 0 0 0 # (Cyl. 133129*- 153447*)
h: 335559918 154674834 4.2BSD 0 0 0 # (Cyl. 153447*- 486343)
i: 20482812 63 Linux Ext2 0 0 # (Cyl. 0*- 20320*)
j: 1959930 20482875 swap # (Cyl. 20320*- 22264*)
Bear in mind that we added two labels here, namely i and j, which map respectively to partition 0 and partition 1 of the disk. We will use these labels later for DOMU configuration.
Now that we have partitioned the disk, proceed with installing your Linux distribution. We will not cover that part in this tutorial. You can either install it from installation media (a CD-ROM, for example), or copy files from an already installed distribution on your computer.
Tip: to manipulate ext2/3 filesystems (the traditional fs under Linux) from NetBSD, you can use sysutils/e2fsprogs from pkgsrc:
# cd /usr/pkgsrc/sysutils/e2fsprogs
# make install
And then use e2fsck, mke2fs and mount_ext2fs(8) directly from NetBSD.
Getting XEN aware Linux kernels
Once installation is done, reboot your computer and return to our Xen-NetBSD system.
To boot our Linux DOMU, we will need a Linux kernel supporting the XENU virtualisation. Depending on your Linux distribution, you can grab one from its repository (it is up to you to find it through aptitude, yum or whatever package manager you use), or get one from the Xen binary distribution.
To get a XENU Linux kernel from Xen binary distribution, get it directly from Xen website download page. Download the tarball and extract the vmlinuz-*-xen from it. In our case, with a 2.6.18 Linux kernel:
# ftp -a http://bits.xensource.com/oss-xen/release/3.1.0/bin.tgz/xen-3.1.0-install-x86_32.tgz
# cd /tmp
# tar -xzf xen-3.1.0-install-x86_32.tgz dist/install/boot/vmlinuz-2.6.18-xen
vmlinuz-2.6.18-xen is the kernel that Xen will use to start the DOMU. Move it to any directory you like (just remember it when configuring the kernel entry in the DOMU configuration file):
# mv dist/install/boot/vmlinuz-2.6.18-xen /vmlinuz-XEN3-DOMU
Configuring DOMU
Configuring a Linux DOMU is a bit different from a NetBSD one; some options differ.
Edit (or create) the configuration file domu-linux in /usr/pkg/etc/xen/:
# vi /usr/pkg/etc/xen/domu-linux
Here's a typical config file for a Linux DOMU:
#----------------------------------------------------------------------------
# Kernel image file. This kernel will be loaded in the new domain.
kernel = "/vmlinuz-XEN3-DOMU"
# Memory allocation (in megabytes) for the new domain.
memory = 256
# A handy name for your new domain. This will appear in 'xm list',
# and you can use this as parameters for xm in place of the domain
# number. All domains must have different names.
#
name = "domu-linux"
# Which CPU to start domain on (only relevant for SMP hardware). CPUs
# numbered starting from ``0''.
#
cpu = "^1" # leave to Xen to pick
#----------------------------------------------------------------------------
# Define network interfaces for the new domain.
# Number of network interfaces (must be at least 1). Default is 1.
vif = [ '' ]
# Define MAC and/or bridge for the network interfaces.
#
# The MAC address specified in ``mac'' is the one used for the interface
# in the new domain. The interface in domain0 will use this address XOR'd
# with 00:00:00:01:00:00 (i.e. aa:00:00:51:02:f0 in our example). Random
# MACs are assigned if not given.
#
# ``bridge'' is a required parameter, which will be passed to the
# vif-script called by xend(8) when a new domain is created to configure
# the new xvif interface in domain0.
#
# In this example, the xvif is added to bridge0, which should have been
# set up prior to the new domain being created -- either in the
# ``network'' script or using a /etc/ifconfig.bridge0 file.
#
vif = [ 'mac=aa:00:00:50:02:f0, bridge=bridge0' ]
#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
#
# Each disk entry is of the form:
#
# phy:DEV,VDEV,MODE
#
# where DEV is the device, VDEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write. You can also create
# file-backed domains using disk entries of the form:
#
# file:PATH,VDEV,MODE
#
# where PATH is the path to the file used as the virtual disk, and VDEV
# and MODE have the same meaning as for ``phy'' devices.
#
# /dev/wd0i will be seen as "hda1" under DOMU (the root partition)
# /dev/wd0j will be seen as "hda2" under DOMU (the swap)
#
disk = [ 'phy:/dev/wd0i,hda1,w','phy:/dev/wd0j,hda2,w' ]
#----------------------------------------------------------------------------
# Set the kernel command line for the new domain.
# Set root device.
root = "/dev/hda1"
Now, you should be able to start your first Linux DOMU!
# xm create -c /usr/pkg/etc/xen/domu-linux
Possible caveats
If you intend to have more than one box configured as above on the same network, you will most likely have to specify a unique MAC address per guest OS, otherwise a conflict is likely. It is not clear whether the MAC assignment is random, incremental, or whether Xen checks for the existence of the proposed MAC address, so specifying the MAC address is recommended.
Here is a method to assign a MAC address to a newly created Xen DOMU. First, as described before, use the following vif parameter in your config file:
vif = [ 'bridge=bridge0' ]
Then run the Xen DOMU and, once logged in, run the following command:
# ifconfig xennet0
Sample output:
xennet0: flags=8863<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST> mtu 1500
capabilities=2800<TCP4CSUM_Tx,UDP4CSUM_Tx>
enabled=0
address: 00:16:3e:2e:32:5f
inet 192.168.4.81 netmask 0xffffff00 broadcast 192.168.4.255
inet6 fe80::216:3eff:fe2e:325f%xennet0 prefixlen 64 scopeid 0x2
A MAC address is automatically generated; use it in your config file with the following syntax:
vif = [ 'mac=00:16:3e:2e:32:5f, bridge=bridge0' ]
And now you can restart the Xen DomU.
Please note that the MAC address MUST start with "00:16:3e".
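If you would rather choose the address up front than copy it from the guest's ifconfig output, you can generate a random one inside the Xen-reserved 00:16:3e prefix. This is only an illustrative sketch, not part of the Xen tools:

```shell
# Random MAC in the Xen OUI (00:16:3e:xx:xx:xx)
MAC=$(awk 'BEGIN { srand();
    printf "00:16:3e:%02x:%02x:%02x",
        int(rand()*256), int(rand()*256), int(rand()*256) }')

# Ready-to-paste vif line for the DomU config file
echo "vif = [ 'mac=$MAC, bridge=bridge0' ]"
```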
SMB aka CIFS (common internet file system) is a ubiquitous file sharing mechanism, but unfortunately it is very insecure. All files are sent clear over the line, and if you don't configure password encryption, even passwords are sent as cleartext.
There is currently no built-in encryption or security in the CIFS protocol, nor is there any available as an extension to Samba, so we'll have to resort to external methods.
One of the nicer ways to secure Samba is by using stunnel. This is a little tool which listens on a port on the client machine and forwards all data sent to that port to another port/machine, encrypted via (Open)SSL.
Setting up the server
Configure samba
You set up the server just as you would normally, as described in How to set up a Samba Server.
If you wish to allow only secure traffic, you can let it listen on localhost with the following statement in smb.conf
:
# Only listen on loopback interface
socket address=127.0.0.1
Configure stunnel
You can install security/stunnel from pkgsrc. Then copy /usr/pkg/share/examples/stunnel/stunnel.conf-sample and modify it to your needs. The following is sufficient if you only need the bare minimum for a secure samba setup:
Simple stunnel configuration for a secure samba setup
# OpenSSL certificate
cert = /usr/pkg/etc/stunnel/stunnel.pem
# Run chrooted as nobody
chroot = /var/run/stunnel
setuid = nobody
setgid = nobody
# This file is created after chrooting
pid = /stunnel
# Accept connections on port 800, on any interface
[smb]
accept = 0.0.0.0:800
# instead of port 139, port 445 will also work, unless you're using Mac OS X clients
connect = localhost:139
As you can see, you'll need an SSL certificate/key. This can be generated like this:
# openssl req -new -nodes -x509 -out /usr/pkg/etc/stunnel/stunnel.pem -keyout /usr/pkg/etc/stunnel/stunnel.pem
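A non-interactive variant of the same command may be convenient in scripts. The subject and the /tmp paths below are examples; install the concatenated result as /usr/pkg/etc/stunnel/stunnel.pem, which is where the cert line in the configuration above points:

```shell
# Generate an unencrypted key and a self-signed certificate
openssl req -new -nodes -x509 -days 365 -subj "/CN=samba-stunnel" \
    -keyout /tmp/stunnel-key.pem -out /tmp/stunnel-cert.pem

# stunnel expects key and certificate in a single PEM file
cat /tmp/stunnel-key.pem /tmp/stunnel-cert.pem > /tmp/stunnel.pem

# Sanity check: print the subject of the certificate just created
openssl x509 -in /tmp/stunnel.pem -noout -subject
```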
Run stunnel
Just add stunnel=yes to your /etc/rc.conf:
# echo "stunnel=yes" >> /etc/rc.conf
# /etc/rc.d/stunnel start
Warning: stunnel is very silent. Even if it gets an error, it will just fail silently. Check with pgrep that it is running.
Configuring your clients
Unix clients
On a Unix client you simply install and run security/stunnel as described above. You'll need to swap the port numbers and put it in client mode, i.e. your stunnel.conf should look like this:
client = yes
[smb]
accept = localhost:139
connect = servername:800
This makes your client act as a samba server to which you can connect. As soon as you connect to your machine, the data is encrypted and forwarded to servername. You can run stunnel from rc.conf just like on the server side.
Of course you can easily test it by connecting to localhost:
# smbclient -U yoda //localhost/myshare
Windows clients
Connecting a Windows client to samba over stunnel is a major hassle. Some background on why this is a problem is in order.
Apparently, when Windows is booted, the kernel binds a socket to port 445 on every real (this is important as we'll see later on) network interface. This means that no other process can ever bind this port. (try it, you'll get a "permission denied" message). This would mean we need to use another port for our fake "shared folder". Unfortunately, the Windows filemanager has no way to specify which port to use when you click "map network drive", so that's not an option.
Luckily for us, Windows has the following odd behaviour: when you click "map network drive" in the file manager, it will first try to connect to port 445. When it finds no service listening there, it will fall back to port 139. Only when that has no service listening either will it tell the user it couldn't connect. We will "abuse" this behaviour by tricking it into using this port.
Simply binding stunnel to port 139 is impossible because Windows binds ports 139 and 445 on every interface, even if no actual files are being shared. It turns out, however, that it doesn't do this on loopback network devices. To install one, follow this set of instructions:
- Open the "add hardware" wizard from the control panel.
- Wait for it to search in vain for new hardware.
- Tell it "yes, I've already connected my hardware" or the wizard will end...
- Pick "add a new device" from the bottom of the list.
- Don't let windows search for the hardware but choose it from a list ("Advanced").
- Pick the category "Network adapters".
- Choose "Microsoft loopback adapter".
When our new "hardware" is installed, you need to assign it an IP and disable NetBIOS activity on it:
- Open the "properties" dialog from the contextmenu in the "network connections" overview.
- Deselect all bindings except the TCP/IP ones. Typically you'll need to deselect "client for Microsoft networks" and "File and printer sharing".
- Select "TCP/IP", and then "settings" (or "properties")
- Choose any private network IP address you'll never see in any real network. (10.232.232.232 is a good example)
- Click "Advanced..."
- Choose the tab titled "WINS"
- Under "NetBIOS settings", click on "Disable NetBIOS over TCP/IP"
Finally, we can install stunnel (Windows binaries are available from www.stunnel.org). Put this in the stunnel.conf file:
client = yes
[smb]
accept = 10.232.232.232:139
connect = servername:800
It is advisable to install the stunnel service so it will start on system boot, which means it will be (semi-)transparent to the user.
To connect to the server, just open the "map network drive" dialog and enter \\10.232.232.232\sharename in the "computer name" box. To make this process a little more user-friendly, think up a hostname and add it to \winnt\system32\drivers\etc\hosts (Windows NT/XP) or \windows\hosts (Windows 9x). The format of this file is exactly like /etc/hosts on Unix.
What TET is
The TETware family of tools are Test Execution Management Systems that take care of the administration, sequencing, reporting and portability of all of the tests that you develop. This frees developers to concentrate on test development and helps testers by providing a single, standard test harness, enabling you to deliver your software projects on time and across multiple operating systems. They are all available off-the-virtual-shelf, easily accessed by FTP download. So stop re-inventing the wheel, take the drudge out of test development and use TETware.
Reason for this article
There is an article describing how to build and run the TET framework from scratch. The pkgsrc version has several restrictions:
- the most recent version is not available
- it requires the pkgsrc infrastructure
- only the distributed version of TET is provided
However, you may apply patches from the pkgsrc tree (for example, patch-ad can still be applied to the distributed TET).
Porting to NetBSD
The current open-sourced version is 3.7a. To download or check for new versions, see the Links section below. (The guide was tested at least on NetBSD-current/i386 and NetBSD-current/ARMv6.)
Patchset
The patch should simply be applied before compilation is started.
Prebuild
After applying the patch, copy the proper makefile:
cp src/defines/UNTESTED/freebsd.mk src/defines/netbsd.mk
Create the $TET_ROOT directory, e.g. /opt/tet3
- Copy content of TET sources tree to $TET_ROOT
To get the distributed TET, additionally:
- Create a regular user tet on the target system, with $TET_ROOT as its home directory and no real shell
Build
- Change working directory to $TET_ROOT
Build the framework:
./configure -t <type>
cd src
make
make install
where <type> is either lite (-t lite) for the Lite version or inet (-t inet) for the Distributed version of the TET framework.
To get the distributed TET, additionally:
- Change owner of the $TET_ROOT to tet recursively
Update user's files
Put the following in your ~/.profile:
export TET_ROOT="/opt/tet3"
export PATH=$PATH:$TET_ROOT/bin
System files correction (distributed TET)
Start the tccd daemon from /etc/rc.conf:
export TET_ROOT="/opt/tet3"
if test -x "$TET_ROOT/bin/tccd"
then
    $TET_ROOT/bin/tccd && echo "TET3 tccd started"
fi
Other files ($TET_ROOT/systems, $HOME/systems.equiv, /etc/services) are corrected according to the TET3 install guide.
Sample of required files
You'll need a minimal $TET_ROOT/systems file. The following content in the file is enough for this test suite.
000 localhost 60103
For the 'inet' version of TET, you'll need tccd running, and a $HOME/systems.equiv file in the home directory of the user running tccd. The following content will do for this file (note: in our case $HOME is equivalent to /opt/tet3):
localhost
And you'll need to append the following lines to /etc/services:
tcc 60103/tcp
tcc 60103/udp
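If you script the setup, the append can be made idempotent so running it twice does not duplicate the entries. Shown here on a scratch copy; point SERVICES at /etc/services on the real system:

```shell
SERVICES=/tmp/services     # stand-in for /etc/services
: > "$SERVICES"            # start from an empty scratch file

# Append the tcc entries only if no tcc line exists yet
if ! grep -q '^tcc[[:space:]]' "$SERVICES"; then
    printf 'tcc\t\t60103/tcp\ntcc\t\t60103/udp\n' >> "$SERVICES"
fi

grep '^tcc' "$SERVICES"
```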
Simple check
The contrib directory in the source tree contains a number of very simple example test cases. Let's run one of them step by step. Consider the test cases under contrib/capi. Log in as user tet. Make sure $TET_ROOT is set to the actual path (/opt/tet3 in our case) and that $TET_ROOT/bin is in your $PATH.
$ echo $TET_ROOT
/opt/tet3
$ echo $PATH
/home/andy/bin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin:/usr/pkg/bin:/usr/pkg/sbin:/usr/games:/usr/local/bin:/usr/local/sbin:/opt/tet3/bin
Change current directory to $TET_ROOT/contrib/capi.
cd $TET_ROOT/contrib/capi
Patch test case build system (yeah, I know...):
--- tools/buildtool.c.orig 2008-11-24 13:15:58.000000000 +0000
+++ tools/buildtool.c 2008-11-24 13:16:15.000000000 +0000
@@ -51,7 +51,7 @@
#define BUILDSUCCESS 0 /* successful return from buildtool */
-static void tbuild()
+void tbuild()
{
char command_line[BUFLEN];
char bres_line[BUFSIZ];
To run the test suite, first apply the profile settings, then run the Setup script, and finally use tcc.
source profile
sh ./Setup
tcc -p -bec contrib/capi
Links
- Free TET Framework
- Documentation
- VSTHlite - pthread test suite
Oracle Express on NetBSD
started by Emmanuel Kasper
Oracle Express is a "free as in beer" version of the Oracle Database. You are free to use it in production and to redistribute it, but it has hardcoded usage limits: one CPU, 1 GB of RAM, and 4 GB of table space.
Download the binary at http://www.oracle.com/technology/software/products/database/xe/index.html
Convert the rpm to cpio format
pkg_add -v rpm2cpio
rpm2cpio.pl oracle-xe-univ-10.2.0.1-1.0.i386.rpm > oracle-xe-univ-10.2.0.1-1.0.i386.cpio
Unpack the cpio archive (it installs in /usr/lib/oracle )
cpio -idv < oracle-xe-univ-10.2.0.1-1.0.i386.cpio
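The conversion and unpacking steps can be wrapped into one guarded sequence (a sketch; the file name is the one used above and must already have been downloaded into the current directory):

```shell
# Convert the Oracle XE rpm to cpio and unpack it (sketch; skips cleanly
# when the rpm has not been downloaded yet)
RPM=oracle-xe-univ-10.2.0.1-1.0.i386.rpm
CPIO=${RPM%.rpm}.cpio
if [ -f "$RPM" ]; then
    rpm2cpio.pl "$RPM" > "$CPIO"
    cpio -idv < "$CPIO"        # unpacks into /usr/lib/oracle
else
    echo "$RPM not found; download it first"
fi
```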
Install the SuSE Linux environment. I don't know what exactly is needed, so I prefer to install everything.
pkg_add -v suse-9.1nb3
Add oracle libraries to ldconfig library cache
echo /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib >> /emul/linux/etc/ld.so.conf
/emul/linux/sbin/ldconfig
Test sqlplus binary
sqlplus /nolog
Right now we have a sqlplus binary that works and is able to connect over the network to a database, but creating a database fails with the error:
ERROR:
ORA-12549: TNS:operating system resource quota exceeded
I guess some kernel recompilation is necessary. See http://n0se.shacknet.nu/
Contents
Introduction
Netbeans, along with Eclipse, is one of the most widely used Java IDEs (Integrated Development Environments), and is also capable of Ruby and C/C++ development. The purpose of this document is to describe the steps needed to run Netbeans 6.0 on a NetBSD system, using a Linux Java Virtual Machine and the Linux compatibility mode.
Downloading Netbeans
The latest version of Netbeans may be downloaded from here. We will be using version 6. After downloading the file netbeans-6.0.zip, we compare the SHA256 sums to ensure data integrity:
$ digest sha256 netbeans-6.0.zip
SHA256 (netbeans-6.0.zip) = fc80d6fd507c5bcc647db564fc99518542b9122d7ea6fcf90abb08156a26d549
XXX: This is from the NetBeans DVD, check if the downloadable version is the same.
which is equal to the one mentioned in the download page.
Next we extract the compressed archive:
$ unzip netbeans-6.0.zip
inflating (...)
$ ls -ld netbeans
drwxr-xr-x 11 user users 1024 Dec 23 16:55 netbeans/
Verifying Linux compatibility mode
Kernel support
Since we are going to install a Linux binary JVM, we must also enable Linux compatibility mode in our system. If you are using a GENERIC kernel, you are already done since the Linux compatibility layer is enabled by default. If not, you will need to compile your kernel with the following options:
options COMPAT_LINUX
In case you are unfamiliar with the process of building a custom kernel, please refer to NetBSD Documentation.
A quick way to check whether you are ok as far as kernel support is concerned, is to invoke the following command:
$ config -x | grep COMPAT_LINUX
options COMPAT_LINUX # binary compatibility with Linux
(Configuration data will be available if the given kernel was compiled with either INCLUDE_CONFIG_FILE or INCLUDE_JUST_CONFIG options.)
Alternatively you can search across the Sysctl tree:
$ sysctl -a | grep emul
emul.linux.kern.ostype = Linux
emul.linux.kern.osrelease = 2.4.18
emul.linux.kern.osversion = #0 Wed Feb 20 20:00:02 CET 2002
Note that the NetBSD documentation covers this topic extensively, so if you run into trouble, please consult the respective pages.
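Both checks can be combined into a small helper (a sketch; it only reports, and assumes the NetBSD config(8) and sysctl(8) utilities):

```shell
# Report whether the Linux compatibility layer is visible, first via the
# compiled-in kernel config, then via the emul.linux sysctl tree
check_compat_linux() {
    if config -x 2>/dev/null | grep -q COMPAT_LINUX; then
        echo enabled
    elif sysctl emul.linux >/dev/null 2>&1; then
        echo enabled
    else
        echo missing
    fi
}
check_compat_linux
```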
Linux shared libraries
Most of the time, applications are linked against shared libraries, and Linux applications need Linux shared libraries. In theory you could get the shared libraries from any Linux distribution, as long as they are not too outdated, but the suggested method is to use the pkgsrc system.
This package supports running ELF binaries linked with glibc2 which don't require X11 shared libraries.
# cd /usr/pkgsrc/emulators/suse100_base/
# make install clean
This package contains some old shared libraries required for backwards compatibility.
# cd /usr/pkgsrc/emulators/suse100_compat/
# make install clean
This package supports running ELF binaries linked with glibc2 which require X11 shared libraries.
# cd /usr/pkgsrc/emulators/suse100_x11/
# make install clean
Installing Sun's JDK
NetBeans IDE 6.0 requires a Java SE JDK, version 5 or 6. lang/jdk6 unfortunately fails to run NetBeans with segmentation fault errors, so we will use lang/jdk15.
Next we modify the make configuration file /etc/mk.conf and add ACCEPTABLE_LICENSES+=jdk13-license to accept the JDK license. You can do that with the following command:
# echo "ACCEPTABLE_LICENSES+=jdk13-license" >> /etc/mk.conf
We will manually download the Linux self-extracting JDK file jdk-1_5_0_12-linux-i586.bin from here and put it in /usr/pkgsrc/distfiles. Since jdk15 depends on jre15, we also have to download the Linux self-extracting JRE file jre-1_5_0_11-linux-i586.bin from here and put it in /usr/pkgsrc/distfiles as well.
$ ls -l *.bin
-rwxr-xr-x 1 user wheel 49622107 Dec 23 15:44 jdk-1_5_0_12-linux-i586.bin
-rwxr-xr-x 1 user wheel 17138633 Dec 23 15:18 jre-1_5_0_12-linux-i586.bin
Now we are ready to install jdk:
# cd /usr/pkgsrc/lang/sun-jdk15/
# make install clean
/proc filesystem
Some Linux programs, like Netbeans, rely on a Linux-like /proc filesystem. The NetBSD procfs filesystem can emulate a /proc filesystem that contains Linux-specific pseudo-files. We now mount the proc filesystem manually:
# mount_procfs -o linux procfs /usr/pkg/emul/linux/proc
and verify it with:
# mount
/dev/wd0g on / type lfs (local)
ptyfs on /dev/pts type ptyfs (local)
tmpfs on /tmp type tmpfs (local)
procfs on /usr/pkg/emul/linux/proc type procfs (local)
You may have NetBSD mount it automatically during the boot process by adding the following line to the /etc/fstab file:
procfs /usr/pkg/emul/linux/proc procfs rw,linux
Please be careful with your /etc/fstab: wrong entries may lead to an unbootable system.
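An idempotent variant of the mount step can be sketched like this (mount_procfs exists on NetBSD only, so the helper just reports failure elsewhere):

```shell
# Mount the Linux-flavoured procfs only if it is not mounted already
PROCDIR=/usr/pkg/emul/linux/proc
if mount | grep -q "$PROCDIR"; then
    echo "procfs already mounted on $PROCDIR"
else
    mount_procfs -o linux procfs "$PROCDIR" 2>/dev/null \
        || echo "mount_procfs failed (requires NetBSD and root)"
fi
```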
Data segment size
/usr/pkgsrc/lang/sun-jdk15/MESSAGE.NetBSD states that "The maximum data segment size assigned to your user must be at least 262144". This value is known not to be enough, so enter:
$ ulimit -d 400000
You may want to put this into your .profile.
In case you encounter runtime errors of the following form:
[stathis@netbsd ~] ./netbeans/bin/netbeans --jdkhome /usr/pkg/java/sun-1.5
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
[stathis@netbsd ~]
try increasing the data segment size.
Running Netbeans
You may now run Netbeans, by typing:
[stathis@netbsd ~] ./netbeans/bin/netbeans --jdkhome /usr/pkg/java/sun-1.5
Note that you can edit netbeans/etc/netbeans.conf
and add the following line:
netbeans_jdkhome="/usr/pkg/java/sun-1.5"
so you won't have to explicitly set the location of the J2SE JDK every time you run Netbeans.
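Both settings, the raised data segment limit and the JDK location, can go into a small launcher script (a sketch; the paths are the ones assumed throughout this article):

```shell
# Launch NetBeans with a raised data segment limit and an explicit JDK home
NB=${NB:-$HOME/netbeans/bin/netbeans}
JDK=${JDK:-/usr/pkg/java/sun-1.5}
ulimit -d 400000 2>/dev/null || echo "warning: could not raise the data segment limit"
if [ -x "$NB" ]; then
    exec "$NB" --jdkhome "$JDK"
else
    echo "NetBeans launcher not found at $NB"
fi
```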
Contents
Things needed
- The latest stable NetBSD/i386 installation.
- The archivers/{zip,unzip} packages.
- The Linux emulation packages suse_base and suse_compat. If you want to run binary modules (MEX), you also need the suse_devel-7.3 package; that is not (yet?) available as a suse9.x version.
- Matlab 14.3 ships with a JDK 1.5 which runs nicely on netbsd-3, so no additional JRE is needed.
- The three Matlab 7 / R14 Unix installation CDROMs.
- The patches listed below.
- A few GBytes of disk space.
Work to do
A Short Cut
While the following steps are kept for documentation and your own experiments, there is a tar-ball available with the patches and a shell script to apply them.
Unpack it, change to the patch_matlabcd directory, and call sh patch-matlab.sh -m /path/to/cdcontent; optionally add '-v' for verbose output.
Footwork
- Copy the three CDROMs to the harddisk (directories CD1, CD2, CD3).
- mount_union(8) a scratch directory on top of the directory that contains CD[123]. This scratch directory will contain all the changes that you make to the content of the CDs.
- chdir to CD1
- Apply the patch
--- /u/software/payware/unix/matlab-r14/CD1/install 2004-09-13 14:56:28.000000000 +0200
+++ install 2004-09-14 11:04:26.000000000 +0200
@@ -516,6 +516,19 @@
Darwin) # Mac OS X
ARCH="mac"
;;
+ NetBSD) # NetBSD (Linux emul)
+ case "`/usr/bin/uname -m`" in
+ i*86)
+ ARCH="glnx86"
+ ;;
+ ia64)
+ ARCH="glnxi64"
+ ;;
+ x86_64)
+ ARCH="glnxa64"
+ ;;
+ esac
+ ;;
esac
fi
return 0
@@ -730,6 +743,19 @@
Darwin) # Mac OS X
ver=`/usr/bin/uname -r | awk '{print $1}'`
;;
+ NetBSD) # NetBSD (Linux emul)
+ case $Arch in
+ glnx86)
+ ver=`/emul/linux/lib/libc.so.6 | head -1 | sed -e "s/^[^0-9]*//" -e "s/[ ,].*$//"`
+ ;;
+ glnxi64)
+ ver=2.2.4
+ ;;
+ glnxa64)
+ ver=`/emul/linux/lib64/libc.so.6 | head -1 | sed -e "s/^[^0-9]*//" -e "s/[ ,].*$//"`
+ ;;
+ esac
+ ;;
esac
fi
#
to the ./install script (assuming /emul/linux is a symlink to the root of the linux emulation setup). Apply equally to CD2 and CD3. Apply the patch
--- /u/software/payware/unix/matlab-r14/CD1/update/install/arch.sh 2004-09-13 14:54:55.000000000 +0200
+++ update/install/arch.sh 2004-09-14 11:38:19.000000000 +0200
@@ -185,6 +185,19 @@
Darwin) # Mac OS X
ARCH="mac"
;;
+ NetBSD) # NetBSD (Linux emul)
+ case "`/usr/bin/uname -m`" in
+ i*86)
+ ARCH="glnx86"
+ ;;
+ ia64)
+ ARCH="glnxi64"
+ ;;
+ x86_64)
+ ARCH="glnxa64"
+ ;;
+ esac
+ ;;
esac
fi
return 0
to ./update/install/arch.sh. Apply equally to CD2 and CD3. Pick the ./update/pd/install/tar.cmp zip archive from CD1, and unpack it.
Archive: tar.cmp
Length Date Time Name
-------- ---- ---- ----
41864 09-14-04 13:10 install_matlab
0 09-14-04 13:08 update/
0 09-14-04 13:08 update/install/
0 09-14-04 13:08 update/install/scripts/
676 07-10-97 05:11 update/install/scripts/abort.sh
2575 01-11-99 20:24 update/install/scripts/actualp.sh
6230 09-14-04 13:11 update/install/scripts/arch.sh
7329 07-27-00 21:42 update/install/scripts/batch1.sh
2597 01-11-99 20:24 update/install/scripts/batch4.sh
6985 01-29-03 21:28 update/install/scripts/bld_lbin.sh
9466 12-11-03 19:21 update/install/scripts/bld_sbin.sh
789 10-23-95 19:07 update/install/scripts/center.sh
530 12-11-03 19:21 update/install/scripts/cleanup.sh
486 01-11-99 20:24 update/install/scripts/clearsc.sh
848 01-11-99 20:24 update/install/scripts/cont.sh
449 01-11-99 20:24 update/install/scripts/echon.sh
2167 11-01-02 03:33 update/install/scripts/fin.sh
4590 03-22-04 06:07 update/install/scripts/genpathdef.sh
3256 12-11-03 19:21 update/install/scripts/intro_l.sh
5376 12-11-03 19:21 update/install/scripts/intro_s.sh
20832 01-09-04 18:30 update/install/scripts/lm.sh
6718 12-11-03 19:21 update/install/scripts/local.sh
3590 12-11-03 19:21 update/install/scripts/main.sh
1115 12-11-03 19:21 update/install/scripts/mapname.sh
1502 01-11-99 20:24 update/install/scripts/netruser.sh
1754 10-23-98 20:50 update/install/scripts/oldname.sh
5267 11-01-02 03:33 update/install/scripts/options.sh
6853 08-31-00 18:09 update/install/scripts/perm.sh
6585 06-12-02 22:21 update/install/scripts/ruser.sh
784 01-11-99 20:24 update/install/scripts/searchp.sh
3303 12-11-03 19:21 update/install/scripts/ucleanpe.sh
3295 12-11-03 19:21 update/install/scripts/update.sh
5395 12-11-03 19:21 update/install/scripts/util.sh
8565 12-11-03 19:21 update/install/scripts/verifyp.sh
-------- -------
171771 34 files
Apply the patch
--- install_matlab.orig 2005-08-12 10:23:52.000000000 +0200
+++ install_matlab 2005-09-30 14:27:23.000000000 +0200
@@ -552,6 +552,19 @@
Darwin) # Mac OS X
ARCH="mac"
;;
+ NetBSD) # NetBSD (Linux emul)
+ case "`/usr/bin/uname -m`" in
+ i*86)
+ ARCH="glnx86"
+ ;;
+ ia64)
+ ARCH="glnxi64"
+ ;;
+ x86_64)
+ ARCH="glnxa64"
+ ;;
+ esac
+ ;;
esac
fi
return 0
@@ -795,6 +808,19 @@
Darwin) # Mac OS X
ver=`/usr/bin/uname -r | awk '{print $1}'`
;;
+ NetBSD) # NetBSD (Linux emul)
+ case $Arch in
+ glnx86)
+ ver=`/emul/linux/lib/libc.so.6 | head -1 | sed -e "s/^[^0-9]*//" -e "s/[ ,].*$//"`
+ ;;
+ glnxi64)
+ ver=2.2.4
+ ;;
+ glnxa64)
+ ver=`/emul/linux/lib64/libc.so.6 | head -1 | sed -e "s/^[^0-9]*//" -e "s/[ ,].*$//"`
+ ;;
+ esac
+ ;;
esac
fi
#
to install_matlab, and the patch
--- orig/update/install/scripts/arch.sh 2004-04-04 10:20:31.000000000 +0200
+++ update/install/scripts/arch.sh 2004-09-14 13:11:17.000000000 +0200
@@ -185,6 +185,19 @@
Darwin) # Mac OS X
ARCH="mac"
;;
+ NetBSD) # NetBSD (Linux emul)
+ case "`/usr/bin/uname -m`" in
+ i*86)
+ ARCH="glnx86"
+ ;;
+ ia64)
+ ARCH="glnxi64"
+ ;;
+ x86_64)
+ ARCH="glnxa64"
+ ;;
+ esac
+ ;;
esac
fi
return 0
(same as above) to update/install/scripts/arch.sh. Re-zip the contents of the archive
[hf@venediger] /var/tmp # zip -r tar.cmp install_matlab update
adding: install_matlab (deflated 77%)
adding: update/ (stored 0%)
adding: update/install/ (stored 0%)
adding: update/install/scripts/ (stored 0%)
adding: update/install/scripts/abort.sh (deflated 53%)
adding: update/install/scripts/actualp.sh (deflated 63%)
adding: update/install/scripts/arch.sh (deflated 69%)
adding: update/install/scripts/batch1.sh (deflated 70%)
adding: update/install/scripts/batch4.sh (deflated 60%)
adding: update/install/scripts/bld_lbin.sh (deflated 75%)
adding: update/install/scripts/bld_sbin.sh (deflated 80%)
adding: update/install/scripts/center.sh (deflated 55%)
adding: update/install/scripts/cleanup.sh (deflated 42%)
adding: update/install/scripts/clearsc.sh (deflated 41%)
adding: update/install/scripts/cont.sh (deflated 53%)
adding: update/install/scripts/echon.sh (deflated 39%)
adding: update/install/scripts/fin.sh (deflated 78%)
adding: update/install/scripts/genpathdef.sh (deflated 66%)
adding: update/install/scripts/intro_l.sh (deflated 80%)
adding: update/install/scripts/intro_s.sh (deflated 85%)
adding: update/install/scripts/lm.sh (deflated 81%)
adding: update/install/scripts/local.sh (deflated 78%)
adding: update/install/scripts/main.sh (deflated 68%)
adding: update/install/scripts/mapname.sh (deflated 54%)
adding: update/install/scripts/netruser.sh (deflated 62%)
adding: update/install/scripts/oldname.sh (deflated 53%)
adding: update/install/scripts/options.sh (deflated 72%)
adding: update/install/scripts/perm.sh (deflated 74%)
adding: update/install/scripts/ruser.sh (deflated 82%)
adding: update/install/scripts/searchp.sh (deflated 47%)
adding: update/install/scripts/ucleanpe.sh (deflated 63%)
adding: update/install/scripts/update.sh (deflated 70%)
adding: update/install/scripts/util.sh (deflated 70%)
adding: update/install/scripts/verifyp.sh (deflated 78%)
to the same name. Copy to CD1, CD2 and CD3. Rinse and repeat.
- And that's about it. Provide a proper license.dat, and you are ready for installation. chdir to the installation directory, call the /path/to/install script from CD1, and click OK a few times.
At the end of the installation, there is a spurious rm -f with empty argument that can safely be ignored. If you find out where it hides, please tell me.
- OK, there's still the license manager...
FlexLM comes with a byzantine set of shell script tools. Of course, there is one more arch.sh to patch, this time under {MATLABHOME}/etc/util/arch.sh. Then create /usr/tmp, preferably as a symlink to /var/tmp, because FlexLM and the MLM backend have that hard-coded for logs, locks, preferences. Make sure you have got a proper license file in place - the procedure is well-documented. After that, you can either start the flexlm daemon from /etc/rc.local by using {MATLABHOME}/etc/rc.lm.glnx86 start|stop, or with the following rc.d script :
#!/bin/sh
#
# $Id: how_to_run_matlab_r14.3_on_netbsd__92__i386.mdwn,v 1.2 2012/02/05 07:14:36 schmonz Exp $
#
# PROVIDE: flexlm
# REQUIRE: DAEMON
. /etc/rc.subr
name="flexlm"
rcvar=$name
flexlm_user="flexlm"
matlabroot="/path/to/matlabr14"
lm_license="${matlabroot}/etc/license.dat"
required_files="${lm_license}"
command="${matlabroot}/etc/glnx86/lmgrd"
command_args="-c ${lm_license} -2 -p -l +/var/log/flexlmlog"
load_rc_config $name
run_rc_command "$1"
-- you need to create a user to run flexlm as, and make that user own the logfile.
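The remaining one-time preparation, creating the user and the logfile, can be sketched as follows (the names match the rc.d script above; run as root on the NetBSD host):

```shell
# Prepare the flexlm user and its logfile for the rc.d script
FLEXLM_USER=flexlm
FLEXLM_LOG=/var/log/flexlmlog
id "$FLEXLM_USER" >/dev/null 2>&1 \
    || useradd "$FLEXLM_USER" 2>/dev/null \
    || echo "create the user $FLEXLM_USER manually"
{ touch "$FLEXLM_LOG" && chown "$FLEXLM_USER" "$FLEXLM_LOG"; } 2>/dev/null \
    || echo "could not prepare $FLEXLM_LOG (need root)"
```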
Other patches
arch.sh
For whatever reason, a Matlab 14 installation has three instances of the arch.sh script that work out what platform Matlab is running on:
> find . -name arch.sh -print
./bin/util/arch.sh
./etc/util/arch.sh
./update/install/scripts/arch.sh
And (of course) the one that is sourced by {MATLABROOT}/bin/matlab is the original version, which you should replace with e.g. {MATLABROOT}/etc/util/arch.sh once you get tired of adding -glnx86 to every matlab invocation.
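Propagating the patched copy can be scripted (a sketch; MATLABROOT is an assumption, and missing files are skipped rather than created):

```shell
# Overwrite the other two arch.sh instances with the patched etc/util copy
MATLABROOT=${MATLABROOT:-/opt/matlabr14}
SRC=$MATLABROOT/etc/util/arch.sh
for f in bin/util/arch.sh update/install/scripts/arch.sh; do
    if [ -f "$SRC" ] && [ -f "$MATLABROOT/$f" ]; then
        cp "$SRC" "$MATLABROOT/$f"
        echo "updated $MATLABROOT/$f"
    else
        echo "skipped $MATLABROOT/$f (file not found)"
    fi
done
```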
MEX
If you want to build binary modules with mex, you need to make sure those modules (libraries) are Linux libraries, since you cannot use native NetBSD libraries with a program running in emulation. Make sure you install the suse_devel package as listed above, and apply this patch to {MATLABROOT}/toolbox/matlab/general/mex.m. It makes sure the mex shell script (which calls the compiler) is executed by the Linux shell.
Sound
Add a symlink to the native /dev/sound like
[hf@dreispitz] /<1>linux/dev > pwd
/usr/pkg/emul/linux/dev
[hf@dreispitz] /<1>linux/dev > ll audio
0 lrwx------ 1 root wheel 13 Sep 10 03:25 audio -> /../dev/sound
Linux pthread libs
Matlab 13, at least, expects the pthread shared library under /usr/lib, whereas NetBSD's suse*_base package installs it under /lib. You need to manually add:
[hf@dreispitz] /<2>usr/lib > ll libpthread.so*
0 lrwxr-xr-x 1 root wheel 25 Jul 22 17:35 libpthread.so -> ../../lib/libpthread.so.0
0 lrwxr-xr-x 1 root wheel 25 Jul 22 17:35 libpthread.so.0 -> ../../lib/libpthread.so.0
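The two symlinks can be created like this (a sketch; EMULDIR is an assumption matching the suse*_base layout described above):

```shell
# Add libpthread symlinks under /usr/lib of the emulation tree
EMULDIR=${EMULDIR:-/usr/pkg/emul/linux}
if [ -d "$EMULDIR/usr/lib" ]; then
    ln -sf ../../lib/libpthread.so.0 "$EMULDIR/usr/lib/libpthread.so"
    ln -sf ../../lib/libpthread.so.0 "$EMULDIR/usr/lib/libpthread.so.0"
else
    echo "$EMULDIR/usr/lib not found; install suse*_base first"
fi
```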
LD_LIBRARY_PATH
As a Linux binary, Matlab makes heavy use of the LD_LIBRARY_PATH environment variable to access shared libraries. For a shell escape, it populates LD_LIBRARY_PATH like:
>> !echo $LD_LIBRARY_PATH
/opt/matlabr14/sys/os/glnx86:/opt/matlabr14/bin/glnx86:/opt/matlabr14/extern/lib/glnx86:/opt/matlabr14/sys/java/jre/glnx86/jre1.5.0/lib/i386/native_threads:/opt/matlabr14/sys/java/jre/glnx86/jre1.5.0/lib/i386/client:/opt/matlabr14/sys/java/jre/glnx86/jre1.5.0/lib/i386
>>
NetBSD ELF binaries do not normally use LD_LIBRARY_PATH; instead, NetBSD compiles fixed library paths into shared libraries and applications. Nevertheless, non-setuid NetBSD binaries still look at LD_LIBRARY_PATH. The above path apparently confuses complex NetBSD binaries like NEdit, XEmacs, Mozilla, which try to pick up Linux libraries instead of the native ones. As a workaround, unset LD_LIBRARY_PATH before calling the application, either with per-application wrapper scripts, or a generic wrapper script like
#!/bin/sh
unset LD_LIBRARY_PATH
exec "$@"
OpenGL
Commands like bench that use OpenGL result in a kernel panic. Christos Zoulas has provided a kernel patch. It is against the 2005-10-10 NetBSD-current source and applies with offsets to the netbsd-3 branch; apply it from within sys/arch/i386/.
The patch avoids the panic; what functionality exactly is missing remains to be seen.
Workaround: the Matlab command opengl software enforces software rendering, which is slower than using hardware support but appears to work, unlike the latter. You still need the kernel patch, though, as any OpenGL operation will panic otherwise.
Issues left
Why Mathworks duplicates its set of install scripts everywhere and still keeps a zipped copy around to drop on top of the installation is beyond me. The diffs above are clean and minimal; it's just that you have to apply five instances of each.
--Hauke Fath
Contents
Things needed
- A NetBSD/i386 installation.
- A Maple install CD (hybrid version for Windows/Mac OS X/Linux).
- A Linux emulation package.
- procfs turned on.
I use Maple 10 on NetBSD 3.1 with the suse10 package from pkgsrc.
Install Maple
Mount the CD.
As the user that will be using the Maple installation, run installMapleLinuxSU from the root directory of the CD.
Follow through the steps. Remember to choose an install folder you have write access to. I will use my home folder.
Upon finishing the install process you will be asked to activate Maple. I advise activating it now instead of trying to later.
Quit the installer.
Tell Maple your OS
Maple uses uname to detect the system type. Running Maple now will result in it telling you that your operating system is unsupported. We need to tell Maple that our system is Linux so we can run it under emulation.
Using your favorite text editor open the file ~/maple##/bin/maple.system.type
This file is a script that runs at startup. Looking at the file, we see that many different system types can be detected and launched. The one we wish to use is bin.IBM_INTEL_LINUX.
There are two ways of doing this:
1: We can add a NetBSD section to the script. Just sneak it in under the Darwin entry:
"Darwin")
# the OSX case
MAPLE_BIN="bin.APPLE_PPC_OSX"
;;
# OUR ADDED SECTION
"NetBSD")
MAPLE_BIN="bin.IBM_INTEL_LINUX"
;;
# END OF OUR SECTION
*)
2: Add one line just above the bottom:
# OUR ADDED LINE
MAPLE_BIN="bin.IBM_INTEL_LINUX"
# END LINE
echo $MAPLE_BIN
exit 0
Launch Maple
From the ~/maple##/bin directory launch either maple or xmaple.
Enjoy your NetBSD Maple math fun!
Contents
Introduction
The NetBSD installation is not big, but it can be made smaller. Here are a few tricks to make the basis of user space, the C standard library libc, a bit smaller for dynamically linked systems. These trials were done on a cross-compiled NetBSD-current ARMv6 branch.
First the user space was built with default options. Then the cross compiler script $TOOLDIR/bin/nbmake-evbarm was used to clean and rebuild libc with special options. The new libc binary was then copied to the target file system, and a smoke test of booting the ARM development board was done. If /sbin/init and /bin/sh managed with the new libc, the test was a success; anything else was a failure.
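The rebuild round used for each measurement below can be sketched as a helper (TOOLDIR, the source tree location, and the make flags are assumptions based on the text):

```shell
# One libc rebuild round: clean, then rebuild with the given flags;
# run from a NetBSD source tree with cross tools in $TOOLDIR
rebuild_libc() (
    TOOLDIR=${TOOLDIR:-/usr/tools}
    cd /usr/src/lib/libc || { echo "no NetBSD source tree"; exit 1; }
    "$TOOLDIR/bin/nbmake-evbarm" cleandir &&
    "$TOOLDIR/bin/nbmake-evbarm" CFLAGS=-Os LDFLAGS='-Wl,-O1 -Wl,-s' dependall
)
rebuild_libc || echo "rebuild skipped (not on a prepared build host)"
```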
The result is a crippled libc, and a /lib/libc.so.12.159 file size reduction from 1164 to 692 kilobytes. Run-time memory usage is harder to predict, since it depends on what parts of the in-memory libc are actually used by the processes, but at least the text and data sections reported by the size utility give some idea.
Build options
Default -O2, file size 1164025
-O2 optimization is used by default and the libc file size after a ./build.sh -U -m evbarm build is:
-r--r--r-- 1 test test 1164025 2008-04-18 08:23 obj/destdir.evbarm/lib/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
931608 25728 64332 1021668 f96e4 obj/libc.so.12.159
-O1, file size 1159845
If libc is built with CFLAGS=-O1, the ELF shared object file size is:
-rw-r--r-- 1 test test 1159845 Apr 19 09:20 lib/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
927436 25728 64332 1017496 f8698 obj/libc.so.12.159
Linker strip, file size 1065624
If the -O1 build is stripped and size optimized by the linker with LDFLAGS=-Wl,-O1\ -Wl,-s, the file size reduces to:
-rwxr-xr-x 1 test test 1065624 2008-04-19 13:28 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
923316 25728 64332 1013376 f7680 obj/libc.so.12.159
-Os, file size 1094281
The gcc compiler can optimize binaries for size with CFLAGS=-Os:
-rwxr-xr-x 1 test test 1094281 2008-04-19 10:56 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
861864 25736 64332 951932 e867c obj/libc.so.12.159
Manual strip, file size 1004180
The binary can then be stripped manually:
$ $TOOLDIR/bin/arm--netbsdelf-strip -s obj/libc.so.12.159
$ ls -l obj/libc.so.12.159
-rwxr-xr-x 1 test test 1004180 2008-04-19 11:02 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
861864 25736 64332 951932 e867c obj/libc.so.12.159
Linker strip, file size 1000060
The -Os compiled binary is smaller with linker based strip and optimization where LDFLAGS=-Wl,-O1\ -Wl,-s than with a manual strip:
-rwxr-xr-x 1 test test 1000060 2008-04-19 12:07 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
857744 25736 64332 947812 e7664 obj/libc.so.12.159
Feature removal
In addition to compiler flags CFLAGS=-Os LDFLAGS=-Wl,-O1\ -Wl,-s, special feature flags can be used to strip features out of libc and reduce its size. Some feature flags, as documented by BUILDING and share/mk/bsd.README, are supported for the whole user space.
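For reference, a sketch of the feature flags exercised in the following subsections, collected in /etc/mk.conf form (apply them one at a time to reproduce the individual measurements; several also require the Makefile patches shown with them):

```
# candidate feature flags for a minimal libc
USE_HESIOD=no
MKHESIOD=no
USE_YP=no
MKYP=no
USE_INET6=no
USE_SSP=no
MKRPC=no
MKGMON=no
MKCITRUS=no
MKMD2=no
MKRMD160=no
MKSHA1=no
MKSHA2=no
MKMD=no
MKMISC=no
MKNLS=no
MKREGEX=no
MKSSP=no
```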
SCCS version strings, file size 953136
SCCS version strings are normally embedded into the object files, but they can be removed by the following change to lib/libc/Makefile.inc:
--- lib/libc/Makefile.inc 3 Jun 2007 17:36:08 -0000 1.3
+++ lib/libc/Makefile.inc 19 Apr 2008 11:01:23 -0000
@@ -24,7 +24,8 @@
.include <bsd.own.mk>
WARNS=4
-CPPFLAGS+= -D_LIBC -DLIBC_SCCS -DSYSLIBC_SCCS -D_REENTRANT
+#CPPFLAGS+= -D_LIBC -DLIBC_SCCS -DSYSLIBC_SCCS -D_REENTRANT
+CPPFLAGS+= -D_LIBC -D_REENTRANT
.if (${USE_HESIOD} != "no")
CPPFLAGS+= -DHESIOD
The resulting libc binary finally goes below the one megabyte mark:
-rwxr-xr-x 1 test test 953136 2008-04-19 13:54 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
857744 25736 64332 947812 e7664 obj/libc.so.12.159
Hesiod name service, file size 942468
Hesiod, a DNS-based database service, can be removed from libc with the USE_HESIOD=no MKHESIOD=no build variables, and the result is:
-rwxr-xr-x 1 test test 942468 2008-04-19 14:16 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
847625 25252 62180 935057 e4491 obj/libc.so.12.159
Yellow Pages (YP), file size 917368
Yellow Pages (YP) or Network Information Service (NIS) directory service support can be removed with USE_YP=no MKYP=no variables:
-rwxr-xr-x 1 test test 917368 2008-04-19 14:29 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
824328 24488 58944 907760 dd9f0 obj/libc.so.12.159
IPv6 support, file size 909272
IPv6 support can be removed with USE_INET6=no:
-rwxr-xr-x 1 test test 909272 2008-04-19 14:48 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
816537 24368 58944 899849 dbb09 obj/libc.so.12.159
Stack smashing protection (SSP), file size 894764
SSP buffer overflow protection from the GCC compiler can be disabled with USE_SSP=no and the libc binary size goes below 900k:
-rwxr-xr-x 1 test test 894764 2008-04-19 15:02 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
802029 24368 58944 885341 d825d obj/libc.so.12.159
Remote procedure call (RPC), file size 806036
RPC support can be disabled with the MKRPC=no variable and a patch like this:
--- lib/libc/Makefile 9 Jan 2008 01:33:52 -0000 1.131.4.1
+++ lib/libc/Makefile 23 Apr 2008 13:04:42 -0000
@@ -74,7 +80,10 @@
.endif
.include "${.CURDIR}/regex/Makefile.inc"
.include "${.CURDIR}/resolv/Makefile.inc"
+MKRPC?= yes
+.if (${MKRPC} != "no")
.include "${.CURDIR}/rpc/Makefile.inc"
+.endif
.include "${.CURDIR}/ssp/Makefile.inc"
.include "${.CURDIR}/stdio/Makefile.inc"
.include "${.CURDIR}/stdlib/Makefile.inc"
As a result the libc size goes down to 806 kilobytes:
-rw-r--r-- 1 test test 806036 2008-04-23 16:00 lib/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
717964 22624 58500 799088 c3170 obj/libc.so.12.159
Execution profiling control, file size 801720
Profiling control can be removed from libc with MKGMON=no and a patch like:
--- lib/libc/Makefile 9 Jan 2008 01:33:52 -0000 1.131.4.1
+++ lib/libc/Makefile 24 Apr 2008 08:07:08 -0000
@@ -58,7 +58,10 @@
.include "${.CURDIR}/dlfcn/Makefile.inc"
.include "${.CURDIR}/gdtoa/Makefile.inc"
.include "${.CURDIR}/gen/Makefile.inc"
+MKGMON?= yes
+.if (${MKGMON} != "no")
.include "${.CURDIR}/gmon/Makefile.inc"
+.endif
.include "${.CURDIR}/hash/Makefile.inc"
.include "${.CURDIR}/iconv/Makefile.inc"
.include "${.CURDIR}/inet/Makefile.inc"
And the libc size goes down by around 4 kilobytes:
-rwxr-xr-x 1 test test 801720 2008-04-24 10:57 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
713959 22500 58436 794895 c210f obj/libc.so.12.159
Citrus I18N, file size 767560
Citrus I18N support can be removed with MKCITRUS=no, a lib/libc/locale/setlocale.c patch, and a makefile patch:
--- Makefile 9 Jan 2008 01:33:52 -0000 1.131.4.1
+++ Makefile 25 Apr 2008 07:51:20 -0000
@@ -53,12 +53,18 @@
.include "${.CURDIR}/../../common/lib/libc/Makefile.inc"
.include "${.CURDIR}/db/Makefile.inc"
+MKCITRUS?= yes
+.if (${MKCITRUS} != "no")
.include "${.CURDIR}/citrus/Makefile.inc"
+.endif
.include "${.CURDIR}/compat-43/Makefile.inc"
.include "${.CURDIR}/dlfcn/Makefile.inc"
.include "${.CURDIR}/gdtoa/Makefile.inc"
The libc binary is now below 800 kilobytes:
-rwxr-xr-x 1 test test 767560 2008-04-25 10:01 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
685150 18696 56968 760814 b9bee obj/libc.so.12.159
MD2, RMD160, SHA1, SHA2, MD4 and MD5 cryptographic hash functions, file size 723780
All cryptographic hash functions can be removed from libc with MKMD2=no MKRMD160=no MKSHA1=no MKSHA2=no MKMD=no and patches to lib/libc/hash/Makefile.inc and lib/libc/Makefile:
--- lib/libc/hash/Makefile.inc 27 Oct 2006 18:29:21 -0000 1.11
+++ lib/libc/hash/Makefile.inc 30 Apr 2008 09:10:21 -0000
@@ -4,8 +4,20 @@
# hash functions
.PATH: ${ARCHDIR}/hash ${.CURDIR}/hash
+MKMD2?= yes
+.if (${MKMD2} != "no")
.include "${.CURDIR}/hash/md2/Makefile.inc"
+.endif
+MKRMD160?= yes
+.if (${MKRMD160} != "no")
.include "${.CURDIR}/hash/rmd160/Makefile.inc"
+.endif
+MKSHA1?= yes
+.if (${MKSHA1} != "no")
.include "${.CURDIR}/hash/sha1/Makefile.inc"
+.endif
+MKSHA2?= yes
+.if (${MKSHA2} != "no")
.include "${.CURDIR}/hash/sha2/Makefile.inc"
+.endif
--- lib/libc/Makefile 9 Jan 2008 01:33:52 -0000 1.131.4.1
+++ lib/libc/Makefile 30 Apr 2008 09:11:25 -0000
.include "${.CURDIR}/inet/Makefile.inc"
.include "${.CURDIR}/isc/Makefile.inc"
.include "${.CURDIR}/locale/Makefile.inc"
+MKMD?= yes
+.if (${MKMD} != "no")
.include "${.CURDIR}/md/Makefile.inc"
+.endif
.include "${.CURDIR}/misc/Makefile.inc"
.include "${.CURDIR}/net/Makefile.inc"
.include "${.CURDIR}/nameser/Makefile.inc"
libc size reduces to:
-rwxr-xr-x 1 test test 723780 2008-04-30 12:03 obj/libc.so.12.159
Sections reported by size utility:
text data bss dec hex filename
642476 18456 56968 717900 af44c obj/libc.so.12.159
Misc, Native Language Support (NLS), Regular Expression and Stack Smashing Protection, file size 691884
Indeed, the misc objects, NLS, regular expression (IEEE Std 1003.2-1992 ("POSIX.2") regular expressions) and Stack Smashing Protection (SSP) support can easily be removed from the build with MKMISC=no MKNLS=no MKREGEX=no MKSSP=no and an obvious patch to lib/libc/Makefile:
.include "${.CURDIR}/md/Makefile.inc"
+.endif
+MKMISC?= yes
+.if (${MKMISC} != "no")
.include "${.CURDIR}/misc/Makefile.inc"
+.endif
.include "${.CURDIR}/net/Makefile.inc"
.include "${.CURDIR}/nameser/Makefile.inc"
+MKNLS?= yes
+.if (${MKNLS} != "no")
.include "${.CURDIR}/nls/Makefile.inc"
+.endif
.if (${MACHINE_ARCH} != "alpha") && (${MACHINE_ARCH} != "sparc64")
.include "${.CURDIR}/quad/Makefile.inc"
.endif
+MKREGEX?= yes
+.if (${MKREGEX} != "no")
.include "${.CURDIR}/regex/Makefile.inc"
+.endif
.include "${.CURDIR}/resolv/Makefile.inc"
...
.include "${.CURDIR}/rpc/Makefile.inc"
+.endif
+MKSSP?= yes
+.if (${MKSSP} != "no")
.include "${.CURDIR}/ssp/Makefile.inc"
+.endif
.include "${.CURDIR}/stdio/Makefile.inc"
.include "${.CURDIR}/stdlib/Makefile.inc"
.include "${.CURDIR}/string/Makefile.inc"
Booting with sbin/init still works, as does bin/sh, but user-friendly programs like bin/ps and bin/ls now fail due to missing symbols:
# ls -l lib/libc.so.12.159
/lib/libc.so.12: Undefined PLT symbol "__fgets_chk" (symnum = 156)
# ls lib/libc.so.12.159
lib/libc.so.12.159
# ps aux
/lib/libc.so.12: Undefined PLT symbol "__strcat_chk" (symnum = 207)
# ps
ps: warning: /var/run/dev.db: /lib/libc.so.12: Undefined PLT symbol "_catopen" )
File size now:
-rwxr-xr-x 1 test test 691884 2008-04-30 14:18 obj/libc.so.12.159
Segment sizes reported by size utility:
text data bss dec hex filename
614238 17284 56928 688450 a8142 obj/libc.so.12.159
Ideas
While a few compiler and feature options were tried out, a number of new ideas came up. Some of these were tried quickly, but they resulted in build failures or smoke/boot test failures.
Compiler options
Thumb instruction set, file size 585440
Compile to the 16-bit Thumb instruction set instead of the normal 32-bit ARM one
- The whole user space needs to be built with -mthumb-interwork
- The CPUFLAGS build variable should contain -mthumb and -mthumb-interwork
- If some files (like atomic_init_testset.c) need ARM code due to embedded ARM assembly or other toolchain issues, the CPUFLAGS build variable can be overridden per file: Per file build options override
- libc from matt-armv6 branch builds with -mthumb but fails to run with SIGILL: Thumb compilation discussion on port-arm
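Under the assumptions above, the build flags could be set in /etc/mk.conf roughly like this (an illustrative sketch; per-file overrides are described in the page linked above):

```
# illustrative /etc/mk.conf fragment for a Thumb userland build
CPUFLAGS+=-mthumb -mthumb-interwork
```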
After a successful compile, with atomic_init_testset.o compiled with -mthumb-interwork only, the file size is:
-rwxr-xr-x 1 mira mira 585440 2008-05-12 11:20 obj/libc.so.12.159
size utility reports:
text data bss dec hex filename
507462 17616 56928 582006 8e176 obj/libc.so.12.159
Feature removals
Database support from lib/libc/db
- sbin/init and possibly others depend on Berkeley DB support
- a custom init and bin/sh probably work without it
Regular memory allocator instead of JEMALLOC
USE_JEMALLOC: (share/mk/bsd.README) If "no", disables building the "jemalloc" allocator designed for improved performance with threaded applications.
- seems to require rebuild of user space, but even after that init dies with:
warning: no /dev/console panic: init died (signal 0, exit 12) Stopped in pid 1.1 (init) at netbsd:cpu_Debugger+0x4: bx r14
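The switch itself is a one-liner in mk.conf (sketch only; as noted above, the resulting system still panicked in this experiment):

```
# illustrative /etc/mk.conf fragment
USE_JEMALLOC=no
```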
Reduce resolver size in lib/libc/resolver
- remove IPv6 support
- remove debug features
Use libhack
- Use the size-optimized distrib/utils/libhack alongside or instead of libc
Alternatives
Crunchgen
crunchgen can be used to create statically linked (at compile time) executables that include only the object files actually used by the executable.
- Use crunchgen or related build infra to build a dynamic library from only those object files that are actually required by the applications
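For reference, a crunchgen configuration is a small file naming the source directories, programs, and libraries to fold into one binary; a minimal sketch (illustrative program list) could look like:

```
# illustrative crunchgen(1) configuration
srcdirs /usr/src/bin /usr/src/sbin
progs init sh ls ps
ln sh -sh
libs -lutil -lcrypt
```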
References
- The BUILDING and share/mk/bsd.README files, in addition to libc-specific makefiles and sources, contain information on the different build and feature options.
- gcc and ld manual pages
- Shrinking NetBSD - deep final distribution re-linker, discussion on tech-userlevel
- Reducing libc size, discussion on tech-userlevel
Introduction
The NetBSD kernel is not big, but it can be made smaller. Generic size reduction steps are:
- compile with size optimizations
- remove debugging and logging features
- remove non-essential functionality
Here is an example showing how these steps were applied to a NetBSD kernel for an ARM development board. As a result, the kernel size was reduced from 2251 to 1124 kilobytes.
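A quick back-of-the-envelope check of that headline figure:

```shell
# 2251 kb down to 1124 kb, as a percentage
awk 'BEGIN { printf "%.0f%% smaller\n", (1 - 1124.0 / 2251.0) * 100 }'
```

This prints "50% smaller".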
Since kernel executable size is not the only measure of kernel memory usage, some tools and their example output are also shown.
Example: OMAP 2420
The board receives the kernel via TFTP and the root filesystem is on NFS. The most important components on the board are the serial ports and the Ethernet adapter.
NetBSD-current has a default kernel configuration for the board: sys/arch/evbarm/conf/TISDP2420. This file includes a number of other configuration files, but after compilation the resulting configuration is available as sys/arch/evbarm/compile/obj/TISDP2420/config_file.h.
The configuration file structure and most options are explained in the options(4) manual page. In this example, options were changed by appending a no options OPTION_NAME directive or an options OPTION_NAME directive to the end of the default configuration file. Appending to the end of the file replaces previous declarations from the included option files without changing those files.
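For example, the removals discussed below could be expressed by appending lines like these to the default configuration (an illustrative selection; each option is treated individually in the following sections):

```
# appended to sys/arch/evbarm/conf/TISDP2420 (illustrative)
no options DIAGNOSTIC       # drop expensive runtime sanity checks
no options DDB              # drop the in-kernel debugger
options PIPE_SOCKETPAIR     # smaller but slower pipe implementation
```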
Default TISDP2420, 2251 kb
The default kernel is built with -O2 optimization and stripped with objdump. Many debugging options are still enabled.
$ ls -l sys/arch/evbarm/compile/obj/TISDP2420/netbsd*
-rwxr-xr-x 1 test test 2614515 2008-06-23 11:32 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 2305536 2008-06-23 11:32 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rwxr-xr-x 1 test test 13908834 2008-06-23 11:32 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.gdb
-rw-r--r-- 1 test test 3440335 2008-06-23 11:32 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
$ size sys/arch/evbarm/compile/obj/TISDP2420/netbsd
text data bss dec hex filename
1902956 339456 214740 2457152 257e40 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
Dmesg shows how much memory is available after boot, when the kernel executable has been loaded to RAM and most of the RAM-based data structures have been initialized:
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58032 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82536000, paddr=0x80710DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255c000, paddr=0x80740DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82582000,
paddr=0x80760gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
Optimize for size with -Os, 2059 kb
The compiler can optimize for size when the build variable DEFCOPTS is set to -Os instead of the default -O2.
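One way to set this is a makeoptions line in the kernel configuration file (a sketch; BUILDING describes other ways to pass build variables):

```
# appended to the kernel configuration file (illustrative)
makeoptions DEFCOPTS="-Os"
```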
-rwxr-xr-x 1 test test 2458323 2008-06-25 09:48 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 2109012 2008-06-25 09:48 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rwxr-xr-x 1 test test 13390766 2008-06-25 09:48 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.gdb
-rw-r--r-- 1 test test 3446815 2008-06-25 09:48 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1764840 339540 214768 2319148 23632c sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg | egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58220 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80710DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80730gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
Remove debugging symbols, without -g, 2059 kb
The build variable DEBUG contains the -g flag. Build without it. The results show that the kernel was already stripped of debug symbols by objdump, so this step is not useful.
-rwxr-xr-x 1 test test 2502667 2008-06-25 09:58 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 2109012 2008-06-25 09:58 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3238409 2008-06-25 09:58 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1764840 339540 214768 2319148 23632c sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58220 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80710DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80730gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
DIAGNOSTIC, 1867 kb
Kernel built without DIAGNOSTIC support.
-rwxr-xr-x 1 test test 2304678 2008-06-25 10:05 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1912404 2008-06-25 10:05 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3208060 2008-06-25 10:05 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1567336 339540 214768 2121644 205fac sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58408 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82538000, paddr=0x806b0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255e000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82584000,
paddr=0x80700gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
VERBOSE_INIT_ARM, 1867 kb
Kernel built without verbose ARM-specific boot messages. VERBOSE_INIT_ARM seems to depend on DDB support.
-rwxr-xr-x 1 test test 2304616 2008-06-25 10:22 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1912404 2008-06-25 10:22 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3207741 2008-06-25 10:22 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1565468 339540 214768 2119776 205860 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58408 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82538000, paddr=0x806b0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255e000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82584000,
paddr=0x80700gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
KTRACE, 1867 kb
Kernel without KTRACE system call tracing support. KTRACE seems to depend on DDB kernel debugger support, since boot hangs without it.
-rwxr-xr-x 1 test test 2303792 2008-06-25 10:39 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1912372 2008-06-25 10:39 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3198575 2008-06-25 10:39 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1557776 339508 214704 2111988 2039f4 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58408 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82538000, paddr=0x806b0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255e000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82584000,
paddr=0x80700gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
DDB, 1867 kb
Kernel built without the DDB in-kernel debugger.
-rwxr-xr-x 1 test test 2260235 2008-06-25 10:42 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1911944 2008-06-25 10:42 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3076696 2008-06-25 10:42 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1512444 339080 209440 2060964 1f72a4 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
total memory = 62464 KB
avail memory = 58416 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82538000, paddr=0x806b0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255e000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82584000,
paddr=0x80700gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
COMPAT_30, 1867 kb
Kernel without NetBSD 3.0 compatibility.
-rwxr-xr-x 1 test test 2259424 2008-06-25 10:55 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1911944 2008-06-25 10:55 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3031468 2008-06-25 10:55 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1509828 339080 209440 2058348 1f686c sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58416 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82538000, paddr=0x806b0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255e000, paddr=0x806e0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82584000,
paddr=0x80700gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
ksyms, 1510 kb
Kernel without /dev/ksyms support.
-rwxr-xr-x 1 test test 1925248 2008-06-25 10:59 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1546376 2008-06-25 10:59 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3023265 2008-06-25 10:59 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1503148 39048 209056 1751252 1ab8d4 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58764 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x8253a000, paddr=0x80660DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x82560000, paddr=0x80680DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82586000,
paddr=0x806b0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
PTRACE, 1510 kb
Kernel without process tracing support.
-rwxr-xr-x 1 test test 1924948 2008-06-25 11:04 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1546376 2008-06-25 11:04 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3015392 2008-06-25 11:04 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1499312 39048 209056 1747416 1aa9d8 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58764 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x8253a000, paddr=0x80660DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x82560000, paddr=0x80680DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82586000,
paddr=0x806b0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
FFS, 1510 kb
Kernel without fast filesystem support.
-rwxr-xr-x 1 test test 1924948 2008-06-25 11:09 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1546376 2008-06-25 11:09 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 3015329 2008-06-25 11:09 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1499248 39048 209056 1747352 1aa998 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58764 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x8253a000, paddr=0x80660DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x82560000, paddr=0x80680DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82586000,
paddr=0x806b0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
MSDOSFS, 1509 kb
Kernel without FAT filesystem support.
-rwxr-xr-x 1 test test 1887407 2008-06-25 11:21 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1546152 2008-06-25 11:21 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2955657 2008-06-25 11:21 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1463132 38824 208672 1710628 1a1a24 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58764 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82538000, paddr=0x80660DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255e000, paddr=0x80680DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82584000,
paddr=0x806b0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
PTYFS, 1509 kb
Kernel without pseudo TTY filesystem and pseudo-device pty.
-rwxr-xr-x 1 test test 1883462 2008-06-25 11:37 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1545916 2008-06-25 11:37 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2911049 2008-06-25 11:37 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1446512 38588 208536 1693636 19d7c4 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58764 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82536000, paddr=0x80660DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255c000, paddr=0x80680DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82582000,
paddr=0x806b0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
bpfilter, 1445 kb
Kernel without Berkeley packet filter pseudo-device support.
-rwxr-xr-x 1 test test 1848968 2008-06-25 11:46 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1480372 2008-06-25 11:46 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2896335 2008-06-25 11:46 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1435984 38580 208272 1682836 19ad94 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58828 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82536000, paddr=0x80650DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255c000, paddr=0x80670DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82582000,
paddr=0x806a0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
INSECURE, 1445 kb
Kernel without integrity protection.
-rwxr-xr-x 1 test test 1848968 2008-06-25 11:54 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1480372 2008-06-25 11:54 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2896335 2008-06-25 11:54 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1435984 38580 208272 1682836 19ad94 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58828 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82536000, paddr=0x80650DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255c000, paddr=0x80670DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82582000,
paddr=0x806a0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
INET6, 1316 kb
Kernel without IPv6 support.
-rwxr-xr-x 1 test test 1666318 2008-06-25 11:57 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1348532 2008-06-25 11:57 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2666559 2008-06-25 11:57 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1271320 37812 199760 1508892 17061c sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58960 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80630DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80650DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80680gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
MALLOC_NOINLINE, 1316 kb
Kernel without inlined malloc functions.
-rwxr-xr-x 1 test test 1666318 2008-06-25 12:01 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1348532 2008-06-25 12:01 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2666559 2008-06-25 12:01 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1271320 37812 199760 1508892 17061c sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58960 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80630DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80650DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80680gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
PIPE_SOCKETPAIR, 1316 kb
Kernel with a smaller but slower pipe implementation.
-rwxr-xr-x 1 test test 1665096 2008-06-25 12:04 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1348468 2008-06-25 12:04 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2658886 2008-06-25 12:04 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1264788 37748 199760 1502296 16ec58 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58964 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80630DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80650DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80670gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
VMSWAP, 1316 kb
Kernel without swap support. This requires a build patch, and the kernel may still crash when uvm statistics are requested by programs like top and ps.
--- a/sys/uvm/uvm_pdpolicy_clock.c
+++ b/sys/uvm/uvm_pdpolicy_clock.c
@@ -262,12 +262,13 @@ uvmpdpol_balancequeue(int swap_shortage)
/*
* if there's a shortage of swap slots, try to free it.
*/
-
+#if defined(VMSWAP)
if (swap_shortage > 0 && (p->pqflags & PQ_SWAPBACKED) != 0) {
if (uvmpd_trydropswap(p)) {
swap_shortage--;
}
}
+#endif /* defined(VMSWAP) */
/*
* if there's a shortage of inactive pages, deactivate.
-rwxr-xr-x 1 test test 1662968 2008-06-25 12:09 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1348392 2008-06-25 12:09 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2642042 2008-06-25 12:09 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1248392 37672 199248 1485312 16aa00 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 58968 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80630DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80650DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80670gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
SYSVMSG, SYSVSEM, SYSVSHM, 1252 kb
Kernel without system V message queues, semaphores and shared memory.
-rwxr-xr-x 1 test test 1627140 2008-06-25 12:15 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1282748 2008-06-25 12:15 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2612942 2008-06-25 12:15 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1225204 37564 198904 1461672 164da8 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 59032 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80620DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80640DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80660gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
rnd, 1252 kb
Kernel without /dev/random pseudo-device.
-rwxr-xr-x 1 test test 1625313 2008-06-25 12:25 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1282684 2008-06-25 12:25 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2599464 2008-06-25 12:25 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1219556 37500 197944 1455000 163398 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 59032 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80620DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80640DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80660gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
md, 1252 kb
Kernel without support for memory devices like ramdisks.
-rwxr-xr-x 1 test test 1624021 2008-06-25 12:29 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1282588 2008-06-25 12:29 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2581393 2008-06-25 12:29 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1214024 37404 197752 1449180 161cdc sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 59032 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82537000, paddr=0x80620DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255d000, paddr=0x80640DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82583000,
paddr=0x80660gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
maxusers 2, 1252 kb
Kernel with maxusers set to two instead of the default 32.
-rwxr-xr-x 1 test test 1624021 2008-06-25 12:34 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1282588 2008-06-25 12:34 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2581393 2008-06-25 12:34 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1214024 37404 197752 1449180 161cdc sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 59548 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x824b6000, paddr=0x805a0DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x824dc000, paddr=0x805c0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82502000,
paddr=0x805e0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
MFS, 1124 kb
Kernel without memory filesystem.
-rwxr-xr-x 1 test test 1483269 2008-06-25 12:37 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1151292 2008-06-25 12:37 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2437216 2008-06-25 12:37 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1106936 37180 197688 1341804 14796c sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 59672 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x824b3000, paddr=0x80580DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x824d9000, paddr=0x805a0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x824ff000,
paddr=0x805d0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
COREDUMP, 1124 kb
Kernel without core dump support.
-rwxr-xr-x 1 test test 1482841 2008-06-25 13:34 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
-rwxr-xr-x 1 test test 1151292 2008-06-25 13:34 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.bin
-rw-r--r-- 1 test test 2428587 2008-06-25 13:34 sys/arch/evbarm/compile/obj/TISDP2420/netbsd.map
text data bss dec hex filename
1102708 37180 197688 1337576 1468e8 sys/arch/evbarm/compile/obj/TISDP2420/netbsd
# dmesg|egrep "memory|gpm"
total memory = 62464 KB
avail memory = 59672 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x824b3000, paddr=0x80580DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x824d9000, paddr=0x805a0DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x824ff000,
paddr=0x805d0gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
gpmc0: CS#0 valid, addr 0x04000000, size 64MB
gpmc0: CS#1 valid, addr 0x08000000, size 16MB
sm0 at gpmc0 addr 0x08000300 intr 188
Kernel memory consumption
The kernel executable size is not the only measure of kernel memory consumption. A number of userspace tools can show how the kernel and its threads use memory while the system is running. The following sections list some example tool output on the ARM OMAP 2420 board.
Using these tools is easy, but interpreting the numbers seems to require some in-depth knowledge of the NetBSD kernel's memory management system, uvm. More details on the meaning of these numbers, as well as examples of problems found and countermeasures like kernel configuration and sysctl settings, would be appreciated.
top
top shows overall memory usage in the system, as well as an interesting process called [system].
# top
load averages: 0.14, 0.03, 0.01 09:20:54
4 processes: 3 sleeping, 1 on CPU
CPU states: % user, % nice, % system, % interrupt, % idle
Memory: 1916K Act, 208K Wired, 996K Exec, 428K File, 52M Free
Swap:
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
51 root 43 0 976K 784K CPU 0:00 14.00% 0.68% top
0 root 125 0 0K 1008K schedule 0:00 0.00% 0.00% [system]
40 root 85 0 980K 956K wait 0:00 0.00% 0.00% sh
1 root 85 0 32K 500K wait 0:00 0.00% 0.00% init
# dmesg|grep -i mem
total memory = 62464 KB
avail memory = 58032 KB
DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82536000, paddr=0x80710DISPC: omap2_lcd_alloc_fb(): memory
allocated at vaddr=0x8255c000, paddr=0x80740DISPC: omap2_lcd_alloc_fb(): memory allocated at vaddr=0x82582000,
paddr=0x80760gpmc0 at mainbus0 base 0x6800a000: General Purpose Memory Controller, rev 2.0
# top -t
load averages: 0.00, 0.00, 0.00 up 0 days, 1:54 11:14:37
23 threads: 5 idle, 17 sleeping, 1 on CPU
CPU states: % user, % nice, % system, % interrupt, % idle
Memory: 3844K Act, 208K Wired, 1308K Exec, 2044K File, 50M Free
Swap:
PID LID USERNAME PRI STATE TIME WCPU CPU COMMAND NAME
0 6 root 223 IDLE 0:00 0.00% 0.00% [system] softser/0
0 3 root 222 IDLE 0:00 0.00% 0.00% [system] softnet/0
0 4 root 221 IDLE 0:00 0.00% 0.00% [system] softbio/0
0 5 root 220 IDLE 0:00 0.00% 0.00% [system] softclk/0
0 2 root 0 IDLE 0:00 0.00% 0.00% [system] idle/0
77 1 root 43 CPU 0:00 0.00% 0.00% top -
0 7 root 127 xcall 0:00 0.00% 0.00% [system] xcall/0
0 25 root 126 pgdaemon 0:00 0.00% 0.00% [system] pgdaemon
0 1 root 125 schedule 0:00 0.00% 0.00% [system] swapper
0 28 root 125 vmem_reh 0:00 0.00% 0.00% [system] vmem_rehash
0 27 root 125 aiodoned 0:00 0.00% 0.00% [system] aiodoned
0 9 root 125 cachegc 0:00 0.00% 0.00% [system] cachegc
0 8 root 125 vrele 0:00 0.00% 0.00% [system] vrele
0 26 root 124 syncer 0:00 0.00% 0.00% [system] ioflush
0 11 root 96 iicintr 0:00 0.00% 0.00% [system] iic0
ps
ps can also show the system's lightweight processes/threads with the -s flag, and the lname keyword displays a more sensible name:
# ps -awxs -o uid,pid,lid,nlwp,pri,ni,vsz,rss,command,lname
UID PID LID NLWP PRI NI VSZ RSS COMMAND LNAME
0 0 28 20 125 0 0 976 [system vmem_rehash
0 0 27 20 125 0 0 976 [system aiodoned
0 0 26 20 124 0 0 976 [system ioflush
0 0 25 20 126 0 0 976 [system pgdaemon
0 0 24 20 96 0 0 976 [system nfsio
0 0 23 20 96 0 0 976 [system nfsio
0 0 22 20 96 0 0 976 [system nfsio
0 0 21 20 96 0 0 976 [system nfsio
0 0 12 20 96 0 0 976 [system iic1
0 0 11 20 96 0 0 976 [system iic0
0 0 10 20 96 0 0 976 [system pmfevent
0 0 9 20 125 0 0 976 [system cachegc
0 0 8 20 125 0 0 976 [system vrele
0 0 7 20 127 0 0 976 [system xcall/0
0 0 6 20 223 0 0 976 [system softser/0
0 0 5 20 220 0 0 976 [system softclk/0
0 0 4 20 221 0 0 976 [system softbio/0
0 0 3 20 222 0 0 976 [system softnet/0
0 0 2 20 0 0 0 976 [system idle/0
0 0 1 20 125 0 0 976 [system swapper
0 1 1 1 85 0 32 500 init -
0 40 1 1 85 0 980 968 -sh -
0 62 1 1 42 0 976 676 ps -awx -
0 63 1 1 85 0 980 784 less -r -
pmap
Displaying process 0 memory map with pmap shows the kernel's memory map.
# pmap 0
80000000 17680K read/write/exec [ anon ]
81144000 4K read/write/exec [ anon ]
81145000 20K read/write/exec [ anon ]
8114A000 7328K read/write/exec [ kmem_map ]
81872000 4096K read/write/exec [ pager_map ]
81C72000 304K read/write/exec [ anon ]
81CBE000 4K read/write/exec [ anon ]
81CBF000 580K read/write/exec [ anon ]
81D50000 4096K read/write/exec [ exec_map ]
82150000 1200K read/write/exec [ phys_map ]
8227C000 2096K read/write/exec [ mb_map ]
82488000 20K read/write/exec [ anon ]
8248D000 8K read/write/exec [ uvm_aobj ]
8248F000 524K read/write/exec [ anon ]
82512000 48K read/write/exec [ uvm_aobj ]
8251E000 32K read/write/exec [ anon ]
82526000 8K read/write/exec [ uvm_aobj ]
82528000 48K read/write/exec [ anon ]
82534000 8K read/write/exec [ uvm_aobj ]
82536000 456K read/write/exec [ anon ]
825A8000 16K read/write/exec [ uvm_aobj ]
825AC000 12K read/write/exec [ anon ]
825AF000 48K read/write/exec [ uvm_aobj ]
825BB000 4K read/write/exec [ anon ]
825BC000 16K read/write/exec [ uvm_aobj ]
825C0000 56K read/write/exec [ anon ]
825CE000 8192K read/write/exec [ ubc_pager ]
82DCE000 52K read/write/exec [ anon ]
82DDB000 8K read/write/exec [ uvm_aobj ]
82DDD000 20K read/write/exec [ anon ]
82DE2000 8K read/write/exec [ uvm_aobj ]
82DE4000 4K read/write/exec [ anon ]
82DE6000 4K read/write/exec [ anon ]
82DE9000 8K read/write/exec [ uvm_aobj ]
82DEB000 8K read/write/exec [ anon ]
82DED000 8K read/write/exec [ uvm_aobj ]
82DEF000 16K read/write/exec [ anon ]
82DF7000 16K read/write/exec [ uvm_aobj ]
82DFB000 12K read/write/exec [ anon ]
82DFE000 8K read/write/exec [ uvm_aobj ]
82E00000 400K read/write/exec [ anon ]
82E68000 12K read/write/exec [ anon ]
82E70000 576K read/write/exec [ anon ]
total 48064K
vmstat
vmstat can display many NetBSD memory-management details and statistics, especially if the kernel is compiled with the KMEMSTAT option, for example vmstat -C -m on the ARM board.
References
- options(4)
- http://www.netbsd.org/docs/guide/en/chap-tuning.html#tuning-considerations-kernel
- Tuning NetBSD for performance
- KMEMSTAT http://mail-index.netbsd.org/tech-net/2006/03/27/0004.html
- VM tuning from swap perspective
- NetBSD Internals: Memory Management
- http://www.netbsd.org/docs/guide/en/chap-tuning.html#tuning-mtools
- How to reduce libc size
- http://mail-index.netbsd.org/port-arm/2008/06/27/msg000275.html
System Requirements
- a DVD-compatible drive
- any multimedia player (for example, xine or mplayer)
- an internet connection to fetch libdvdcss files required for descrambling
Introduction
CSS (Content Scrambling System) is a scheme used mainly by copyright holders to protect commercial DVDs from unauthorized copying, by literally scrambling the DVDs' content through a (weak) encryption process.
By default under NetBSD, when installing xine or mplayer to read commercial DVDs, libdvdcss (needed by libavcodec to decipher the DVD's content) is not built. This guide will show you how to build and install the libdvdcss package in order to use it.
Please note that under certain legislations such a method is forbidden (especially in the US, thanks to the DMCA). Use libdvdcss at your own risk!
Building and using libdvdcss
Libdvdcss is already present in pkgsrc. As it is illegal in certain jurisdictions, pkgsrc cannot install it directly without some user interaction.
Before building it, we first need to add some information to mk.conf(5), namely the master sites from which to fetch the libdvdcss source code. There are many, including the videolan project, which hosts libdvdcss:
# echo "LIBDVDCSS_MASTER_SITES='the website you found'" >> /etc/mk.conf
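Equivalently, you can add the line to mk.conf by hand. The URL below is only an illustration (the videolan project's download area); any mirror carrying the libdvdcss tarball works:

```
# /etc/mk.conf fragment -- example value, substitute the site you found
LIBDVDCSS_MASTER_SITES= https://download.videolan.org/pub/libdvdcss/
```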
Now, start building libdvdcss (should not take more than a couple minutes):
# cd /usr/pkgsrc/multimedia/libdvdcss
# make install clean
That's it. From now on, libavcodec (and consequently your favorite multimedia player) should automagically use libdvdcss, through dlopen(3), to read your commercial DVDs when required.
Contents
Introduction
NetBSD lets you mount ISO images using the vnd(4) disk driver.
The vnd driver provides a disk-like interface to a file.
Mounting the Image
# vnconfig vnd0 v7x86-0.8a.iso
# mount -t cd9660 /dev/vnd0a /mnt
# cd /mnt
# ls
COPYRIGHT README boot.cat v7x86-0.8a.tar version
INSTALL RELNOTES boot.img v7x86intro.pdf
Unmounting the Image
# umount /mnt
# vnconfig -u vnd0
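The mount and unmount steps can be wrapped in two small shell functions. This is only a sketch: vnd0, the vnd0a partition, and the cd9660 type follow the commands above, and error handling is minimal.

```shell
# mount_iso IMAGE [MOUNTPOINT] -- attach IMAGE to vnd0 and mount it
# (defaults to /mnt, matching the example above).
mount_iso() {
    image=$1
    mnt=${2:-/mnt}
    if [ -z "$image" ]; then
        echo "usage: mount_iso image.iso [mountpoint]" >&2
        return 1
    fi
    vnconfig vnd0 "$image" && mount -t cd9660 /dev/vnd0a "$mnt"
}

# umount_iso [MOUNTPOINT] -- undo the above.
umount_iso() {
    umount "${1:-/mnt}" && vnconfig -u vnd0
}
```

Usage: `mount_iso v7x86-0.8a.iso` and later `umount_iso`.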
Additional Information
- vnd(4) Manpage
- vnconfig(8) Manpage
- mount(8) Manpage
- umount(8) Manpage
Contents
Verify UFS support
To check whether your Linux kernel supports the UFS filesystem you may execute the following command:
$ cat /proc/filesystems
nodev	sysfs
nodev	rootfs
nodev	proc
...
	ext3
nodev	usbfs
	vfat
	ufs
The keyword nodev in the first column means that the filesystem does not require a block device to be mounted; that is why it is also called a virtual filesystem. Support is either compiled into the kernel or built as a module:
$ ls -l /lib/modules/2.6.21-ARCH/kernel/fs/ufs/ufs.ko
-rw-r--r-- 1 root root 84828 2007-05-25 20:11 /lib/modules/2.6.21-ARCH/kernel/fs/ufs/ufs.ko
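Checking /proc/filesystems for a given filesystem name can be scripted; a minimal sketch (the optional second argument exists only so the helper can be exercised against a sample file):

```shell
# fs_supported NAME [FILE] -- succeed if NAME appears as a filesystem
# in FILE (default: the running kernel's /proc/filesystems).
fs_supported() {
    grep -qw "$1" "${2:-/proc/filesystems}"
}
```

Usage: `fs_supported ufs || sudo modprobe ufs` to load the module when support is not yet present.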
Mount
In order to find the device that corresponds to your FFS partition, run:
# sfdisk -l
Disk /dev/hda: 155061 cylinders, 16 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/hda1 * 0+ 34536- 34537- 17406396 7 HPFS/NTFS
end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/hda2 34536+ 134767- 100231- 50516392+ f W95 Ext'd (LBA)
start: (c,h,s) expected (1023,15,63) found (1023,255,63)
end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/hda3 134767+ 144935- 10169- 5124735 a5 FreeBSD
start: (c,h,s) expected (1023,15,63) found (1023,255,63)
end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/hda4 144935+ 155060 10126- 5103189 a9 NetBSD
start: (c,h,s) expected (1023,15,63) found (1023,255,63)
end: (c,h,s) expected (1023,15,63) found (1023,80,63)
/dev/hda5 34536+ 102366- 67830- 34186288+ 83 Linux
start: (c,h,s) expected (1023,15,63) found (0,1,1)
end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/hda6 102366+ 104294 1929- 971901 82 Linux swap / Solaris
start: (c,h,s) expected (1023,15,63) found (0,1,1)
end: (c,h,s) expected (1023,15,63) found (120,254,63)
/dev/hda7 104295+ 134767- 30473- 15358108+ 83 Linux
start: (c,h,s) expected (1023,15,63) found (0,1,1)
end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/hda8 134767+ 143910- 9143- 4608000
/dev/hda9 143910+ 144935- 1026- 516735
/dev/hda10 144935+ 154078- 9143 4608072
/dev/hda11 154078+ 155060 983- 495117
/dev/hda12 0+ 34536- 34537- 17406396
/dev/hda13 34536+ 102366- 67830- 34186288+
/dev/hda14 102366+ 104294 1929- 971901
/dev/hda15 104295+ 144935- 40641- 20482843+
So for FreeBSD (FFSv2) we have /dev/hda3, which is equivalent to /dev/ad0s3, and for NetBSD (FFSv1) we have /dev/hda4, which is equivalent to /dev/wd0c.
But these devices are whole BSD slices (BIOS partitions), not BSD partitions.
By examining the sfdisk -l output carefully, we find that:
- /dev/hda3 (134767+, 144935-) includes /dev/hda8 (134767+, 143910-) and /dev/hda9 (143910+, 144935-)
- /dev/hda4 (144935+, 155060) includes /dev/hda10 (144935+, 154078-) and /dev/hda11 (154078+, 155060)
And we may deduce that for FreeBSD:
- /dev/hda8 is equivalent to /dev/ad0s3a (the FreeBSD root partition)
- /dev/hda9 is equivalent to /dev/ad0s3b (FreeBSD swap)
And for NetBSD:
- /dev/hda10 is equivalent to /dev/wd0a (the NetBSD root partition)
- /dev/hda11 is equivalent to /dev/wd0b (NetBSD swap)
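This containment reasoning is just a comparison of the start and end cylinders printed by sfdisk. As a sketch, it can be checked with a tiny shell helper (the numbers are the ones from the listing above):

```shell
# contains SLICE_START SLICE_END PART_START PART_END
# Succeeds when the partition's cylinder range lies inside the slice's.
contains() {
    [ "$3" -ge "$1" ] && [ "$4" -le "$2" ]
}

# /dev/hda4 (the NetBSD slice) spans cylinders 144935-155060:
contains 144935 155060 144935 154078 && echo "hda10 lies inside hda4"
contains 144935 155060 154078 155060 && echo "hda11 lies inside hda4"
```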
Thus the FreeBSD root partition lies at /dev/hda8. First create a directory on which to mount the FFS partition, then mount it:
# mkdir /mnt/freebsd
# mount -t ufs -o ro,ufstype=ufs2 /dev/hda8 /mnt/freebsd/
And the NetBSD root partition lies at /dev/hda10. First create a directory on which to mount the FFS partition, then mount it:
# mkdir /mnt/netbsd
# mount -t ufs -o ro,ufstype=44bsd /dev/hda10 /mnt/netbsd/
Let's browse it:
# ls /mnt/*bsd
/mnt/freebsd:
bin cdrom COPYRIGHT dist etc lib media proc root sys usr
boot compat dev entropy home libexec mnt rescue sbin tmp var
/mnt/netbsd:
altroot etc gnome-screensave.core mnt root var
bin GENERIC kern netbsd sbin
boot GENERIC-DIAGNOSTIC lib onetbsd stand
CUSTOM GENERIC-LAPTOP libdata proc tmp
dev GENERIC-NOACPI libexec rescue usr
Edit /etc/fstab
Add the following lines to your /etc/fstab file:
/dev/hda8 /mnt/freebsd ufs ufstype=ufs2,ro 0 2
/dev/hda10 /mnt/netbsd ufs ufstype=44bsd,ro 0 2
Now you can mount the FFS partitions by typing:
# mount /mnt/freebsd
# mount /mnt/netbsd
and verify with:
$ mount
[...]
/dev/hda8 on /mnt/freebsd type ufs (ro,ufstype=ufs2)
/dev/hda10 on /mnt/netbsd type ufs (ro,ufstype=44bsd)
[...]
Write support
Write support is available provided several conditions are satisfied:
- UFS write support compiled into the Linux kernel (CONFIG_UFS_FS_WRITE=y); it is disabled by default
- an FFSv1 filesystem (FFSv2 is not yet supported)
Please note that as I do not really need write support on NetBSD partitions from GNU/Linux, I did not bother to rebuild my Linux kernel and hence have not tested this feature.
Remarks
If you forget the ro option, you will get the following message in dmesg:
$ dmesg | grep ufs
ufs was compiled with read-only support, can't be mounted as read-write
If you forget to set the ufstype option, you will get the following message in dmesg:
$ dmesg | grep ufstype
mount -t ufs -o ufstype=sun|sunx86|44bsd|ufs2|5xbsd|old|hp|nextstep|nextstep-cd|openstep ...
>>>WARNING<<< Wrong ufstype may corrupt your filesystem, default is ufstype=old
So, extra care should be taken.
People have reported crashes when accessing FFS partitions under GNU/Linux (even in read-only mode, which is very strange). I suspect this was caused by accessing a whole BSD slice (a BSD dedicated BIOS partition) instead of a BSD partition.
Why use PXELINUX
You may have a running Linux server that supports network boot via PXELINUX, because it allows a nice selection menu of bootable images, and want to add a NetBSD kernel to that menu.
In an experimental environment, this allows booting diagnostic tools like memtest, Linux kernels like GRML, and a variety of Linux installers, and also installing NetBSD without removable devices.
However, if the networked machine is already dedicated to NetBSD, using pxeboot_ia32.bin directly to boot a NetBSD kernel for repairs, or to boot a diskless workstation with NetBSD, would be better.
Using PXELINUX to chain boot
PXELINUX can boot not only Linux kernels but also a boot sector. Thus, we can instruct PXELINUX to load the NetBSD netboot loader pxeboot_ia32.bin, which can then load a NetBSD kernel.
The only tricky thing is where to put which files. The two files needed can be found, e.g., on a bootable CD-ROM.
- Copy pxeboot_ia32.bin to the same directory where pxelinux.0 resides. If the DHCP config file contains filename="/lts/i386/pxelinux.0", then copy pxeboot_ia32.bin to e.g. /tftpboot/lts/i386/
- Copy the kernel, e.g. netbsd-INSTALL.gz, to the directory denoted by option root-path "/opt/ltsp/i386" in the DHCP config file, possibly renaming it to the default name netbsd.
- Edit pxelinux.cfg/default (or any other file there), e.g. like this:
# NetBSD
label netbsd
kernel pxeboot_ia32.bin
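A slightly fuller sketch of the same pxelinux.cfg/default file, with a default entry and a boot prompt timeout; the DEFAULT and TIMEOUT values here are assumptions, only the netbsd label comes from this article:

```
DEFAULT netbsd
TIMEOUT 100

LABEL netbsd
    KERNEL pxeboot_ia32.bin
```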
In the case of a NetBSD installation, proceed as in Example installation, probably using FTP, HTTP, or NFS as the installation source.
Remarks:
- The kernel(s) can and should stay gzipped
- Unless an installation kernel is used, the kernel will try to NFS-mount the root-path from the DHCP config file as a writable root filesystem, i.e. it will try to be a diskless workstation; see the link below
- For NetBSD 5, use NETBSD_INSTALL_FLOPPY.GZ as installation kernel
See also
Contents
Introduction
The standard kernel that you build for the NSLU2 requires that the Slug be modified to provide a serial port so that you can interact with the Slug during the boot process and to log in. This page will show you how to boot your Slug into NetBSD using only an ethernet connection. The strategy is to configure and build the kernel so that it automatically finds and mounts the root disk through DHCP and NFS without requiring that we type in the location of the root drive through the serial port. The root disk is a modification of the typical setup that allows an insecure telnet login to the NetBSD kernel running on the NSLU2. Once logged in, you can set up a username and password, enable ssh, and close out the insecure telnet connection.
The command line instructions that follow are for a Linux system using bash. They should be pretty much the same for another *nix system, except for the differences due to the shell.
Get the source code
To get the source code (current):
$ mkdir ~/net
$ export CVS_RSH="ssh"
$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ cd ~/net
$ cvs checkout -A -P src
Note: the CVS address didn't work for me; I used: export CVSROOT="anoncvs@anoncvs.se.NetBSD.org:/cvsroot"
This will create a directory ~/net/src with the source tree in it.
See the section below (Versions that are known to work) to get an older version of NetBSD-current that should build and boot correctly. This section also has an update for changes to build.sh.
Build the kernel
Note: The procedure shown here works for versions of NetBSD before about August 5, 2008. See the section below (Versions that are known to work) for changes to the three build.sh command lines.
First, build the tools.
$ cd ~/net/src
$ ./build.sh -m evbarm -a armeb tools
You need to get the Intel proprietary firmware for the NSLU2 Ethernet controller. Follow the instructions in ~/net/src/sys/arch/arm/xscale/ixp425-fw.README. After building the firmware as described in the README, copy the file "IxNPEMicrocode.dat" to the directory ~/net/src/sys/arch/arm/xscale (the same directory as the README).
NOTE: Versions 3.0 and later of the Intel firmware don't work. Use version 2.3.2 or 2.4.
We'll build three versions of the kernel, though only one will be used here. The other two kernels will be used when we move from mounting NetBSD using an NFS root to a disk drive connected to one of the USB ports. Add a configuration file, NSLU2_ALL, with the configuration for the three kernels. This file goes in the same directory as the standard NSLU2 configuration file. Note that we're including the standard file in our configuration file.
$ cd ~/net/src/sys/arch/evbarm/conf
$ echo 'include "arch/evbarm/conf/NSLU2"' >NSLU2_ALL
$ echo 'config netbsd-nfs root on npe0 type nfs' >>NSLU2_ALL
$ echo 'config netbsd-sd0 root on sd0a type ffs' >>NSLU2_ALL
$ echo 'config netbsd-sd1 root on sd1a type ffs' >>NSLU2_ALL
$ cat NSLU2_ALL
include "arch/evbarm/conf/NSLU2"
config netbsd-nfs root on npe0 type nfs
config netbsd-sd0 root on sd0a type ffs
config netbsd-sd1 root on sd1a type ffs
Now build everything. (Much appreciation to Iain Hibbert for his help in understanding build.sh and its numerous configuration variables!)
$ cd ~/net/src
$ ./build.sh -u -U -m evbarm -a armeb build
$ ./build.sh -u -U -m evbarm -a armeb -V ALL_KERNELS=NSLU2_ALL release
When finished, you'll find all of the necessary files in ~/net/src/obj/releasedir/evbarm/binary/sets.
$ ls -la ~/net/src/obj/releasedir/evbarm/binary/sets/
total 116200
drwxr-xr-x 2 owner owner 4096 2008-03-07 10:22 .
drwxr-xr-x 5 owner owner 4096 2008-03-07 10:19 ..
-rw-rw-r-- 1 owner owner 24189627 2008-03-07 10:21 base.tgz
-rw-rw-r-- 1 owner owner 272 2008-03-07 10:22 BSDSUM
-rw-rw-r-- 1 owner owner 366 2008-03-07 10:22 CKSUM
-rw-rw-r-- 1 owner owner 33559987 2008-03-07 10:21 comp.tgz
-rw-rw-r-- 1 owner owner 369051 2008-03-07 10:21 etc.tgz
-rw-rw-r-- 1 owner owner 3233648 2008-03-07 10:21 games.tgz
-rw-rw-r-- 1 owner owner 25920807 2008-03-07 10:19 kern-ADI_BRH.tgz
-rw-rw-r-- 1 owner owner 1267036 2008-03-07 10:19 kern-IXM1200.tgz
-rw-rw-r-- 1 owner owner 8615170 2008-03-07 10:19 kern-NSLU2_ALL.tgz
-rw-rw-r-- 1 owner owner 2872331 2008-03-07 10:19 kern-NSLU2.tgz
-rw-rw-r-- 1 owner owner 8455973 2008-03-07 10:22 man.tgz
-rw-rw-r-- 1 owner owner 632 2008-03-07 10:22 MD5
-rw-rw-r-- 1 owner owner 3318310 2008-03-07 10:22 misc.tgz
-rw-rw-r-- 1 owner owner 1820 2008-03-07 10:22 SHA512
-rw-rw-r-- 1 owner owner 274 2008-03-07 10:22 SYSVSUM
-rw-rw-r-- 1 owner owner 3815709 2008-03-07 10:22 tests.tgz
-rw-rw-r-- 1 owner owner 3073516 2008-03-07 10:22 text.tgz
The kern-ADI_BRH.tgz and kern-IXM1200.tgz are kernels for other ARM boards that are built automatically when you specify -m evbarm. We won't use them.
Set up the NFS file system
To set up the NFS server, see http://www.netbsd.org/docs/network/netboot/nfs.html and http://www.netbsd.org/docs/network/netboot/files.html. NOTE: The instructions that follow are written for system configurations where the NFS/tftp/DHCP server is a different machine from your build machine. If they are the same system, then ignore the ssh and scp commands.
Log into the NFS server using ssh, and set up the NetBSD file structure:
$ sudo mkdir -p /export/client/root/dev
$ sudo mkdir /export/client/home
$ sudo touch /export/client/swap
$ sudo dd if=/dev/zero of=/export/client/swap bs=4k count=4k
$ sudo chmod 600 /export/client/swap
$ sudo mkdir /export/client/root/swap
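For reference, the dd command above writes bs * count bytes of zeros, which you can verify with shell arithmetic:

```shell
# Size of the swap file written by the dd command above (bs=4k count=4k):
echo $((4096 * 4096))   # 16777216 bytes, i.e. a 16 MiB swap file
```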
Copy the necessary files from the build machine to the NFS system. On your build machine:
$ cd ~/net/src/obj
$ scp -r releasedir/evbarm/binary/sets nfsserver:/export/client
Build the NetBSD root file system on the NFS server:
$ cd /export/client/root
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/base.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/comp.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/etc.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/games.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/man.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/misc.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/tests.tgz
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/text.tgz
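The eight tar invocations above follow the same pattern, so they can be collapsed into a loop. A sketch, with the paths from this article as overridable defaults:

```shell
# Unpack the NetBSD distribution sets into the NFS root directory.
# Defaults match the paths used in this article; override via environment.
SETS_DIR=${SETS_DIR:-/export/client/releasedir/evbarm/binary/sets}
ROOT_DIR=${ROOT_DIR:-/export/client/root}
NETBSD_SETS="base comp etc games man misc tests text"

extract_sets() {
    cd "$ROOT_DIR" || return 1
    for set in $NETBSD_SETS; do
        sudo tar --numeric-owner -xvpzf "$SETS_DIR/$set.tgz" || return 1
    done
}
```

Run `extract_sets` on the NFS server after copying the sets over.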
The next part is slightly more complicated for the two-system configuration. To make the /dev directory for the NFS file system, mount the NFS file system on your build machine:
$ sudo mount -t nfs nfsserver:/export/client/root /mnt/root
$ cd /mnt/root/dev
Then execute the MAKEDEV shell script from your build machine. Skip the mounting part if you use only one system.
$ sudo sh ./MAKEDEV -m ~/net/src/obj/tooldir.YOUR.SYSTEM.HERE/bin/nbmknod all
Save the original NetBSD /etc directory, just in case you would like to refer to it later:
$ sudo cp -r /export/client/root/etc /export/client/root/orig.etc
$ cd /export/client/root/etc
Set up the various files in the exported etc so that the system will boot up and allow logins via telnet. The edit command is shown, along with the final file configuration, except for inetd.conf, which is very long.
/export/client/root/etc/hosts
$ sudo nano hosts
# $NetBSD: how_to_install_netbsd_on_the_linksys_nslu2___40__slug__41___without_a_serial_port__44___using_nfs_and_telnet.mdwn,v 1.3 2019/04/08 13:40:19 sevan Exp $
#
# Host Database
# This file should contain the addresses and aliases
# for local hosts that share this file.
# It is used only for "ifconfig" and other operations
# before the nameserver is started.
#
#
::1 localhost localhost.
127.0.0.1 localhost localhost.
#
# RFC 1918 specifies that these networks are "internal".
# 10.0.0.0 10.255.255.255
# 172.16.0.0 172.31.255.255
# 192.168.0.0 192.168.255.255
192.168.1.102 nfsserver # my NFS server
192.168.1.240 slug1 # my NSLU2
/export/client/root/etc/fstab
$ sudo nano fstab
#/etc/fstab
nfsserver:/client/swap none swap sw,nfsmntpt=/swap
nfsserver:/client/root / nfs rw 0 0
/export/client/root/etc/ifconfig.npe0
$ sudo nano ifconfig.npe0
inet client netmask 255.255.255.0 broadcast 192.168.1.255
/export/client/root/etc/inetd.conf
$ sudo nano inetd.conf
Change the two lines:
#telnet stream tcp nowait root /usr/libexec/telnetd telnetd -a valid
#telnet stream tcp6 nowait root /usr/libexec/telnetd telnetd -a valid
to
telnet stream tcp nowait root /usr/libexec/telnetd telnetd
telnet stream tcp6 nowait root /usr/libexec/telnetd telnetd
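The two-line change can also be scripted. A sketch using GNU sed (the -i flag is GNU-specific; BSD sed would need -i ''):

```shell
# enable_telnet FILE -- uncomment the telnet tcp/tcp6 lines in FILE and
# drop the "-a valid" restriction, as described above.
enable_telnet() {
    sed -i -e 's|^#\(telnet[[:space:]].*telnetd\) -a valid$|\1|' "$1"
}
```

Usage: `enable_telnet /export/client/root/etc/inetd.conf`.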
/export/client/root/etc/rc.conf
$ sudo nano rc.conf
# $NetBSD: how_to_install_netbsd_on_the_linksys_nslu2___40__slug__41___without_a_serial_port__44___using_nfs_and_telnet.mdwn,v 1.3 2019/04/08 13:40:19 sevan Exp $
#
# see rc.conf(5) for more information.
#
# Use program=YES to enable program, NO to disable it. program_flags are
# passed to the program on the command line.
#
# Load the defaults in from /etc/defaults/rc.conf (if it's readable).
# These can be overridden below.
#
if [ -r /etc/defaults/rc.conf ]; then
. /etc/defaults/rc.conf
fi
# If this is not set to YES, the system will drop into single-user mode.
#
rc_configured=YES
# Add local overrides below
#
sshd=YES
hostname="slug1"
defaultroute="192.168.1.1"
nfs_client=YES
auto_ifconfig=NO
net_interfaces=""
/export/client/root/etc/ttys
$ sudo nano ttys
# $NetBSD: how_to_install_netbsd_on_the_linksys_nslu2___40__slug__41___without_a_serial_port__44___using_nfs_and_telnet.mdwn,v 1.3 2019/04/08 13:40:19 sevan Exp $
#
# from: @(#)ttys 5.1 (Berkeley) 4/17/89
#
# name getty type status comments
#
console "/usr/libexec/getty default" vt100 on secure
ttyp0 "/usr/libexec/getty Pc" vt100 off secure
ttyE0 "/usr/libexec/getty Pc" vt220 off secure
ttyE1 "/usr/libexec/getty Pc" vt220 off secure
ttyE2 "/usr/libexec/getty Pc" vt220 off secure
ttyE3 "/usr/libexec/getty Pc" vt220 off secure
tty00 "/usr/libexec/getty default" unknown off secure
tty01 "/usr/libexec/getty default" unknown off secure
tty02 "/usr/libexec/getty default" unknown off secure
tty03 "/usr/libexec/getty default" unknown off secure
tty04 "/usr/libexec/getty default" unknown off secure
tty05 "/usr/libexec/getty default" unknown off secure
tty06 "/usr/libexec/getty default" unknown off secure
tty07 "/usr/libexec/getty default" unknown off secure
Setup the tftp, NFS, and DHCP servers
Now, set up tftp, NFS, and DHCP. On my Fedora 7 system, I use the following setup. Please note that the files shown below are believed to be correct, but if tftp, NFS, or DHCP isn't working for you, try searching the web for a howto. Also, check your SELinux settings; mine were preventing the slug from attaching to the NFS files. Remember: kern-NSLU2_ALL.tgz was copied to the NFS server's /export/client directory in the previous section. (NB: it wouldn't be a bad idea for somebody to add the settings required for a NetBSD system here.) As in the previous section, edit or create the files as necessary using nano or your favorite text editor.
$ cd /export/client/root
$ sudo tar --numeric-owner -xvpzf /export/client/releasedir/evbarm/binary/sets/kern-NSLU2_ALL.tgz
$ cp *.bin /tftpboot
$ sudo chmod 666 /tftpboot/*bin
/etc/hosts
192.168.1.240 slug1
192.168.0.1 redboot
192.168.1.102 your_host_name #use your host address
/etc/hosts.allow
in.tftpd: 192.168.0.1
rpcbind: 192.168.1.240
lockd: 192.168.1.240
rquotad: 192.168.1.240
mountd: 192.168.1.240
statd: 192.168.1.240
/etc/xinetd.d/tftp
service tftp
{
disable = no
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /tftpboot
per_source = 11
cps = 100 2
flags = IPv4
}
/etc/dhcpd.conf
ddns-update-style ad-hoc;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option domain-name-servers xxx.xxx.xxx.xxx; #Use your nameserver address
default-lease-time 2592000;
allow bootp;
allow booting;
#option ip-forwarding false; # No IP forwarding
#option mask-supplier false; # Don't respond to ICMP Mask req
subnet 192.168.1.0 netmask 255.255.255.0 {
option routers 192.168.1.1;
range 192.168.1.110 192.168.1.189;
}
group {
next-server 192.168.1.102; # IP address of your TFTP server
option routers 192.168.1.1;
default-lease-time 2592000;
host slug1 {
hardware ethernet 00:18:39:a2:26:7c;
fixed-address 192.168.1.240;
option root-path "/client/root";
}
}
/etc/exports
/export/client/root 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
/export/client/swap 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
Interrupting Slug bootup using telnet
Unless we modify the flash memory of the Slug, the normal boot process is to load RedBoot, wait a few seconds, then load a kernel and memory disk image from flash and execute it. If you have installed a serial port, this process can be interrupted by typing ^C within two seconds of the following message appearing on the serial console:
RedBoot(tm) bootstrap and debug environment [ROMRAM]
Red Hat certified release, version 1.92 - built 15:16:07, Feb 3 2004
Platform: IXDP425 Development Platform (XScale)
Copyright (C) 2000, 2001, 2002, Red Hat, Inc.
RAM: 0x00000000-0x02000000, 0x000723a0-0x01ff3000 available
FLASH: 0x50000000 - 0x50800000, 64 blocks of 0x00020000 bytes each.
== Executing boot script in 2.000 seconds - enter ^C to abort
The good people at www.nslu2-linux.org have also documented a way to do the same thing with telnet. This means you can use telnet to interrupt the boot process and instruct the Slug to load an executable using tftp. Of course, the Slug will still use the serial port as the login console, but the changes we made above will also allow you to log in as root using telnet. Note that this process requires two different telnet sessions, even though they are to the same device (in general, they will use two different IP addresses), since one is to RedBoot and the second is to NetBSD.
There are several methods described for using telnet to interrupt RedBoot, which you can find at http://www.nslu2-linux.org/wiki/HowTo/TelnetIntoRedBoot. My personal preference is the one near the bottom of the web page entitled C program using Berkeley Sockets. Just copy the source code from the web page, paste it into a file (telnet_slug.c) using any editor, and compile. I had to add two header files, string.h and stdlib.h, to get the program to compile using Fedora 8 and gcc 4.1.2. The first three lines in my file look like:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
Then,
$ gcc telnet_slug.c -o telnet_slug
to compile the program. You also need to set your network configuration so that your computer can respond to the Slug when it starts up with IP address 192.168.0.1. For my Fedora 8 system, I use the following:
$ sudo /sbin/ifconfig eth0:1 inet 192.168.0.2 broadcast 192.168.0.255 netmask 255.255.255.0
You'll find other suggestions for other operating systems at the top of the aforementioned web page. Now, run the program before you power on the Slug and you should see:
$ ./telnet_slug
== Executing boot script in 1.950 seconds - enter ^C to abort
Telnet escape character is '~'.
Trying 192.168.0.1...
Connected to 192.168.0.1.
Escape character is '~'.
RedBoot>
What could be easier? Occasionally, the system trying to telnet into RedBoot will be a little too slow, so if you don't see anything happening after a minute or so, try again. For reference, my Slug with the clock speed-up modification, takes 12 seconds from power on to the RedBoot prompt. Presumably, it will take 24 seconds if you have an older Slug without the modification.
Booting the Slug with NFS
Now, start up the NSLU2 and interrupt the boot process as described above:
$ ./telnet_slug
== Executing boot script in 1.960 seconds - enter ^C to abort
Telnet escape character is '~'.
Trying 192.168.0.1...
Connected to 192.168.0.1.
Escape character is '~'.
RedBoot> ip_address -h 192.168.0.2
IP: 192.168.0.1/255.255.255.0, Gateway: 192.168.0.1
Default server: 192.168.0.2, DNS server IP: 0.0.0.0
RedBoot> load -r -b 0x200000 netbsd-nfs.bin
Using default protocol (TFTP)
Raw file loaded 0x00200000-0x004a2ba7, assumed entry at 0x00200000
RedBoot> g
~
telnet> q
Connection closed.
$ telnet slug1
Trying 192.168.1.240...
Connected to slug1.
Escape character is '^]'.
NetBSD/evbarm (slug1) (ttyp0)
login: root
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
2006, 2007, 2008
The NetBSD Foundation, Inc. All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
NetBSD 4.99.54 (NSLU2_NFS) #0: Fri Feb 15 23:04:29 EST 2008
Welcome to NetBSD!
This system is running a development snapshot of the NetBSD operating system,
also known as NetBSD-current. It is highly possible for it to contain serious
bugs, regressions, broken features or other problems. Please bear this in mind
and use the system with care.
You are encouraged to test this version as thoroughly as possible. Should you
encounter any problem, please report it back to the development team using the
send-pr(1) utility (requires a working MTA). If yours is not properly set up,
use the web interface at: http://www.NetBSD.org/support/send-pr.html
Thank you for helping us test and improve NetBSD.
We recommend creating a non-root account and using su(1) for root access.
slug1#
The first time I booted the Slug using NFS, it took several minutes to set up files and such, so be patient.
Using sysinst to install NetBSD onto a USB drive
If you're new to NetBSD, you might feel more comfortable using the NetBSD installer to set up NetBSD on your USB thumb or hard disk. Everything you need was built during the kernel build steps above. The installer consists of five files; one is the executable and the other four are the installation messages in German, French, Spanish, and Polish. To use the installer, move it to the NFS server used above to boot up the NSLU2 and add the installation directory to the exported NFS directories in /etc/exports on the NFS server. Don't forget to update the exports list.
$ cd ~/net/src/distrib/evbarm/instkernel/ramdisk/obj/work/
$ scp sysinst* nfsserver:/export/client/
$ sudo nano /etc/exports
$ sudo /usr/sbin/exportfs -ra
NFS server's /etc/exports:
/export/client 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
/export/client/root 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
/export/client/swap 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
Boot up the slug as before, and mount the newly exported directory.
slug1# mkdir /mnt/inst
slug1# mount -t nfs nfsserver:/export/client /mnt/inst
slug1# ls -la /mnt/inst
total 18064
drwxrwxrwx 6 root wheel 4096 Mar 7 19:41 .
drwxr-xr-x 5 root wheel 4096 Mar 7 19:52 ..
drwxr-xr-x 2 root wheel 4096 Feb 26 21:53 home
drwxr-xr-x 3 500 501 4096 Mar 7 16:03 releasedir
drwxr-xr-x 20 root wheel 4096 Mar 7 16:05 root
-rw------- 1 root wheel 16777216 Feb 28 03:15 swap
-r-xr-xr-x 1 500 501 1571140 Mar 7 19:41 sysinst
-r--r--r-- 1 500 501 23994 Mar 7 19:41 sysinstmsgs.de
-r--r--r-- 1 500 501 23727 Mar 7 19:41 sysinstmsgs.es
-r--r--r-- 1 500 501 23785 Mar 7 19:41 sysinstmsgs.fr
-r--r--r-- 1 500 501 21140 Mar 7 19:41 sysinstmsgs.pl
drwxr-xr-x 2 root wheel 4096 Feb 26 21:53 usr
Make sure that NetBSD recognizes your USB drive.
slug1# dmesg | grep sd
sd0 at scsibus0 target 0 lun 0: <SanDisk, U3 Cruzer Micro, 3.21> disk removable
sd0: 3919 MB, 7964 cyl, 16 head, 63 sec, 512 bytes/sect x 8027793 sectors
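As a quick way to confirm the drive was detected correctly, the sector count dmesg reports agrees with the advertised capacity. A small sketch, using the figures copied from the dmesg line above (512-byte sectors):

```shell
# 8027793 sectors of 512 bytes is about 3919 MB, matching dmesg's report.
echo $((8027793 * 512 / 1024 / 1024))   # prints 3919
```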
Change to the directory with the installer and run it. You'll see the following messages on your telnet terminal.
slug1# cd /mnt/inst
slug1# ./sysinst
Welcome to sysinst, the NetBSD-4.99.55 system installation tool. This menu-driven tool is designed
to help you install NetBSD to a hard disk, or upgrade an existing NetBSD system, with a minimum of work.
[...snip (Select your language preference)...]
+-----------------------------------------------+
¦ NetBSD-4.99.55 Install System ¦
¦ ¦
¦ ¦
¦>a: Install NetBSD to hard disk ¦
¦ b: Upgrade NetBSD on a hard disk ¦
¦ c: Re-install sets or install additional sets ¦
¦ d: Reboot the computer ¦
¦ e: Utility menu ¦
¦ x: Exit Install System ¦
+-----------------------------------------------+
First, select the utility menu option. From here, set up your network so that the installation program can generate the necessary files in /etc. When done, return to the install system menu and select the "Install NetBSD to hard disk" option and follow the instructions. When you get to the third menu, select "Custom installation". Then mark the following items Yes:
The following is the list of distribution sets that will be used.
Distribution set Selected
------------------------ --------
a: Kernel (ADI_BRH) No
b: Kernel (INTERGRATOR) No
c: Kernel (IQ80310) No
d: Kernel (IQ80321) No
e: Kernel (TEAMASA_NPWR) No
f: Kernel (TS7200) No
g: Base Yes
h: System (/etc) Yes
i: Compiler Tools Yes
j: Games No
k: Online Manual Pages Yes
l: Miscellaneous Yes
m: Test programs Yes
n: Text Processing Tools Yes
o: X11 sets None
>x: Install selected sets
Continue with the normal installation. When asked for the location of the distribution files, select Local Directory.
Your disk is now ready for installing the kernel and the distribution sets.
As noted in your INSTALL notes, you have several options. For ftp or nfs,
you must be connected to a network with access to the proper machines.
Sets selected 7, processed 0, Next set base.
+-------------------------+
¦ Install from ¦
¦ ¦
¦ a: CD-ROM / DVD ¦
¦ b: FTP ¦
¦ c: HTTP ¦
¦ d: NFS ¦
¦ e: Floppy ¦
¦ f: Unmounted fs ¦
¦>g: Local directory ¦
¦ h: Skip set ¦
¦ i: Skip set group ¦
¦ j: Abandon installation ¦
+-------------------------+
Next screen:
Enter the already-mounted local directory where the distribution is located.
Remember, the directory should contain the .tgz files.
>a: Base directory /mnt/inst/releasedir
b: Set directory /evbarm/binary/sets
x: Continue
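sysinst joins those two answers together, so with the values above it will look for the .tgz sets in the concatenation of the base and set directories. A sketch of the effective path:

```shell
# The two sysinst answers, joined, give the directory searched for sets.
base=/mnt/inst/releasedir
setdir=/evbarm/binary/sets
echo "${base}${setdir}"   # prints /mnt/inst/releasedir/evbarm/binary/sets
```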
Make sure you enter a password for the root user, since we are going to use secure shell to login to the NSLU2. Once the installation is finished, NetBSD checks the file system to see if everything looks OK. Since we didn't install a kernel to the disk, NetBSD will think that the installation is incomplete. You can ignore the warning message. Exit the installation, which unmounts the USB disk, and remount the disk. Edit the two files below, then reboot the NSLU2.
slug1# mkdir /mnt/d0
slug1# mount /dev/sd0a /mnt/d0
slug1# cd /mnt/d0/etc
slug1# vi rc.conf
slug1# vi ssh/sshd_config
slug1# reboot
Slug's rc.conf:
#
# see rc.conf(5) for more information.
#
# Use program=YES to enable program, NO to disable it. program_flags are
# passed to the program on the command line.
#
# Load the defaults in from /etc/defaults/rc.conf (if it's readable).
# These can be overridden below.
#
if [ -r /etc/defaults/rc.conf ]; then
. /etc/defaults/rc.conf
fi
# If this is not set to YES, the system will drop into single-user mode.
#
rc_configured=YES
# Add local overrides below
#
hostname=slug1.
defaultroute="192.168.1.1"
sshd=YES
Slug's ssh/sshd_config:
# $OpenBSD: sshd_config,v 1.75 2007/03/19 01:01:29 djm Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
[...snip...]
PermitRootLogin yes
[...snip...]
After rebooting, use the method described above for interrupting the boot process with telnet, assign the host IP address, and use tftp to load the kernel netbsd-sd0.bin, which uses sd0a as the root drive. Remember, this is one of the three kernels we built earlier. You should be able to ssh to your Slug and log in as root.
$ ./telnet_slug
== Executing boot script in 1.640 seconds - enter ^C to abort
Telnet escape character is '~'.
Trying 192.168.0.1...
Connected to 192.168.0.1.
Escape character is '~'.
RedBoot> ip_address -h 192.168.0.2
IP: 192.168.0.1/255.255.255.0, Gateway: 192.168.0.1
Default server: 192.168.0.2, DNS server IP: 0.0.0.0
RedBoot> load -r -b 0x200000 netbsd-sd0.bin
Using default protocol (TFTP)
Raw file loaded 0x00200000-0x004a2b9f, assumed entry at 0x00200000
RedBoot> g
~
telnet> q
Connection closed.
$ ssh root@slug1
Password:
Last login: Thu Mar 20 21:51:38 2008 from 192.168.1.105
NetBSD 4.99.55 (NSLU2_ALL) #0: Sat Mar 8 11:33:58 EST 2008
Welcome to NetBSD!
[...snip...]
Terminal type is xterm.
We recommend creating a non-root account and using su(1) for root access.
slug1#
Troubleshooting
Can't format USB drive with sysinst
On occasion, I've had trouble with sysinst failing to install to the USB drive. The symptoms seen most often are the slug hanging while formatting the disk or while untarring the distribution tarballs. I've had some success deleting the disklabel for the USB drive and then restarting sysinst. To do this (reference: To clear the disklabels), exit the install system and enter at the root prompt:
dd if=/dev/zero of=/dev/sd0c bs=8k count=1
Then, restart the installer and try again. Of course, make sure you use the device that corresponds to the USB drive you want to work on.
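That dd command can be rehearsed safely on a scratch image file before pointing it at real hardware. This is only a sketch (disk.img is a made-up stand-in for the real /dev/sd0c); the point is that only the first 8 KB, where the MBR and disklabel live, gets zeroed:

```shell
# Practice run on an image file; disk.img stands in for /dev/sd0c.
# Double-check dmesg output before running this against a real device.
IMG=disk.img
dd if=/dev/urandom of="$IMG" bs=8k count=4 2>/dev/null   # fake "dirty" disk
dd if=/dev/zero of="$IMG" bs=8k count=1 conv=notrunc 2>/dev/null
# The first 8192 bytes are now zero; the remaining 24 KB is untouched.
```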
Build error with some versions of Linux
With certain versions of Linux (notably Fedora 7), I get an error that looks like:
checking for i686-pc-linux-gnu-gcc... cc
checking for C compiler default output file name... configure: error: C compiler cannot create executables
See `config.log' for more details.
nbgmake: *** [configure-gcc] Error 1
*** Failed target: .build_done
(more error output)
Try defining the following two variables, then follow the build instructions above:
$ export HOST_CC=/usr/bin/gcc
$ export HOST_CXX=/usr/bin/g++
You can find additional suggestions in ~/net/src/BUILDING.
Versions that are known to work
Since, at the time of this writing, you must use -current to get a version of NetBSD that will run on the Slug, you will occasionally find that the kernel doesn't boot quite right. Don't complain - that's what -current is for. Here, we'll try to keep track of the latest version of NetBSD that is known to build and boot correctly. You can get these versions by changing the CVS command line. To get an older version of NetBSD-current use:
$ export CVS_RSH="ssh"
$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ cd ~/net
$ cvs checkout -D 20080420-UTC src
The script build.sh has changed since this article was first written. The following worked on August 23, 2008:
export CVS_RSH="ssh"
export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
cvs checkout -D 20080821-UTC src
Note: a checkout date of 20081215-UTC built and ran correctly, though there have been some reported problems with the 20081215 build.
Get the NPE code from Intel, as above. Setup the kernel configuration files as above. Then, build as follows:
./build.sh -O ../obj -T ../tools -m evbarm-eb tools
./build.sh -O ../obj -T ../tools -U -u -m evbarm-eb distribution
./build.sh -O ../obj -T ../tools -U -u -m evbarm-eb -V KERNEL_SETS=NSLU2_ALL release
The files you'll need are now in ~/net/obj/releasedir/evbarm/binary/sets.
To add USB audio support
Create the kernel configuration file:
$ echo 'include "arch/evbarm/conf/NSLU2"' >NSLU2_AUDIO
$ echo 'uaudio* at uhub? port ? configuration ?' >>NSLU2_AUDIO
$ echo 'audio* at uaudio?' >>NSLU2_AUDIO
$ echo 'config netbsd-aud-npe0 root on npe0 type nfs' >>NSLU2_AUDIO
$ echo 'config netbsd-aud-sd0 root on sd0a type ffs' >>NSLU2_AUDIO
$ echo 'config netbsd-aud-sd1 root on sd1a type ffs' >>NSLU2_AUDIO
Build as described in the above link, except change the final build command line to:
$ ./build.sh -u -U -m evbarm -a armeb -V KERNEL_SETS=NSLU2_AUDIO release
Boot the kernel
Use tftp to load the kernel as described in the link above. You should see the following lines (or something similar) in your dmesg or console output. The order of the lines may be slightly different. You may not see anything about "uhub3" unless you have an external hub.
uhub0 at usb1: vendor 0x1033 OHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhub1 at usb2: vendor 0x1033 EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub1: 5 ports with 5 removable, self powered
uhub2 at usb0: vendor 0x1033 OHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub2: 3 ports with 3 removable, self powered
ehci0: handing over full speed device on port 1 to ohci0
uaudio0 at uhub2 port 1 configuration 1 interface 0: C-Media INC. USB Sound Device, rev 1.10/0.10, addr 2
uaudio0: audio rev 1.00
audio0 at uaudio0: full duplex, independent
uhub3 at uhub1 port 2: vendor 0x0409 product 0x005a, class 9/0, rev 2.00/1.00, addr 2
Note in particular the line that says "ehci0: handing over full speed device on port 1 to ohci0." The NetBSD USB ehci driver cannot handle isochronous devices (required for audio), but the ohci driver can. Unfortunately, it also appears that NetBSD cannot handle an attached hub with the ohci driver, so you can't plug the USB audio device into a hub; it must be plugged directly into one of the two USB ports on the back of the device.
Add the device entry in /dev
As root, enter the following:
# cd /dev
# ./MAKEDEV audio
You should now be able to play music to /dev/audio. If you want to play MP3 files, I recommend using madplay, which can be installed from pkgsrc.
In this article I will explain, step by step, how to install NetBSD on a RAID-1 root disk.
Contents
Foreword
So, you just got yourself a pair of shiny new SCSI or SATA disks? You care about your data and you think backup is as important as having redundant data? You are disappointed because the NetBSD raid1-root guide at 1 looks rather complicated? You want to set up a raid1 mirror during installation? This guide is loosely based on the netbsd.org guide above, so check that out as well to get a better understanding of what's happening here.
Requirements
NetBSD 5.0 and two free hard disks
Booting from the CD
Boot the NetBSD 5.0 CD and, after selecting your language and keyboard layout, navigate to
e: Utility menu
and then
a: Run /bin/sh
Now make sure you know your disk names; you can either see them during boot (you can get those messages again by typing dmesg at the shell prompt you just opened) or use something like sysctl hw.disknames. I'll use wd0 and wd1 (both as whole disks) in this guide.
fdisk
Let's interactively setup the first partition on both disks with fdisk and mark the active partition as well (bold text is what you should enter, otherwise just accept the default by pressing return):
# fdisk -0ua wd0
fdisk: primary partition table invalid, no magic in sector 0
Disk: /dev/rwd0d
NetBSD disklabel disk geometry:
cylinders: 16645, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
total sectors: 16778160
BIOS disk geometry:
cylinders: 1024, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
total sectors: 16778160
Do you want to change our idea of what the BIOS thinks? [n]
Partition 0:
The data for partition 0 is:
sysid: [0..255 default: 169]
start: [0..1044cyl default: 63, 0cyl, 0MB]
size: [0..144cyl default: 16778097, 1044cyl, 8192MB]
bootmenu: []
Do you want to change the active partition? [n] y
Choosing 4 will make no partition active.
active partition: [0..4 default: 0]
Are you happy with this choice? [n] y
We haven't written the MBR back to disk yet. This is your last chance.
Partition table:
0: NetBSD (sysid 169)
start 63, size 33555249 (16384 MB, Cyls 0-2088/183/63), Active
1:
2:
3:
Bootselector disabled.
First active partition: 0
Should we write new partition table? [n] y
Do the same for wd1:
# fdisk -0ua wd1
... see above
disklabel
Now we need to edit the disklabel, it's easy, we just need to change one thing:
# disklabel -i -I wd0
partition> P
...
e: 33555312 63 4.2BSD ...
partition> e
Filesystem type [?] [4.2BSD]: RAID
Start offset ('x' to start after partition 'x') [0.0625c, 63s, 0.0307617M]:
Partition size ('$' for all remaining) [16644.9c, 16778097s, 8192.43M]:
partition> W
Label disk [n]?y
Label written
partition> Q
Note that the letter e is only an example; you need to identify your root partition yourself.
Do the same with wd1:
# disklabel -i -I wd1
...
You may save typing by copying the configuration:
# disklabel wd0 >/tmp/wd0.inf
# disklabel -R wd1 /tmp/wd0.inf
raid0.conf and raidctl
Next we'll configure RAIDframe (again, check the guide above and the raidctl(8) manpage to see what all this is about):
# cat > /tmp/raid0.conf
START array
1 2 0
START disks
/dev/wd0e
/dev/wd1e
START layout
128 1 1 1
START queue
fifo 100
^D
# raidctl -v -C /tmp/raid0.conf raid0
raid0: Component /dev/wd0e being configured at col: 0
Column: 0 Num Columns: 0
Version: 0 Serial Number: 0 Mod Counter: 0
Clean: No Status: 0
Number of columns do not match for /dev/wd0e
/dev/wd0e is not clean!
raid0: Component /dev/wd1e being configured at col: 1
Column: 0 Num Columns: 0
Version: 0 Serial Number: 0 Mod Counter: 0
Clean: No Status: 0
Number of columns do not match for /dev/wd1e
/dev/wd1e is not clean!
raid0: There were fatal errors
raid0: Fatal errors being ignored.
raid0: RAID Level 1
raid0: Components: /dev/wd0e /dev/wd1e
raid0: Total Sectors 16777984 (8192 MB)
# raidctl -v -I 2009072201 raid0
# raidctl -v -i raid0
Initiating re-write of parity
Parity Re-write status:
98% |***************************************| ETA: 00:01 -
# raidctl -v -A root raid0
raid0: New autoconfig value is: 1
raid0: New rootpartition value is: 1
raid0: Autoconfigure: Yes
raid0: Root: Yes
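A quick sanity check on that output: the array is a little smaller than the 16778097-sector RAID partition because RAIDframe reserves room for its component labels and rounds down to a whole number of stripes, and the reported megabyte figure matches the sector count (512-byte sectors, so 2048 sectors per MB):

```shell
# raidctl reported "Total Sectors 16777984 (8192 MB)"; verify the math.
echo $((16777984 / 2048))   # sectors -> MB; prints 8192
```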
installboot
One last thing, we need to make both disks bootable:
# installboot -o timeout=30 -v /dev/rwd0e /usr/mdec/bootxx_ffsv1
File system: /dev/rwd0e
Primary bootstrap: /usr/mdec/bootxx_ffsv1
Ignoring PBR with invalid magic in sector 0 of '/dev/rwd0e'
Boot options: timeout 30, flags 0, speed 9600, ioaddr 0, console pc
# installboot -o timeout=30 -v /dev/rwd1e /usr/mdec/bootxx_ffsv1
...
Returning to sysinst
And that's it, return to sysinst and continue the installation of NetBSD, just select raid0 as the disk you want to install NetBSD to (and, of course, make sure you can boot from both disks after that):
# exit
This how-to describes how to install NetBSD/hpcsh on a Jornada 620LX. We will be using a 2 GB compact flash card and no swap.
The following is a basic recollection of the steps required to wrangle NetBSD 4.99.69 onto my 75MHz, 16MB-RAM HP Jornada 620LX. I know of at least one other 620LX with NetBSD on it, but that was 1.6; most of the directions I found with regards to putting NetBSD on a Jornada were for the 720+ lines, which host a 200+MHz StrongARM CPU as well as 32-64+MB RAM.
Also, not having a serial cable for this (somewhat) rare Jornada, I did the entire install through the in-ROM Windows CE and a CF disk.
Contents
List of things necessary for this install
- An x86 machine capable of running NetBSD
- Your Jornada 620LX
- A CF disk of roughly 1 GB or larger
- A CF disk reader for the x86 machine
The process over-simplified
- Install NetBSD on x86 and bring it up to -current
- Build tools/kernel/release for HPCSH on the x86 machine
- Partition (fdisk) & DiskLabel CF Disk
- Unpack release onto CF Disk
- Boot Jornada into CE and run HPCBoot.exe from CF Disk
- Enjoy NetBSD
The REAL breakdown
- Install onto a spare x86 machine. I'm not going to hand-hold through this install; a basic install is perfectly fine.
In /usr/src, build the HPCSH(-3) tools:
$ cd /usr/src/ && ./build.sh -u -m hpcsh tools
Build the HPCSH(-3) kernel:
$ ./build.sh -u -m hpcsh kernel=GENERIC
Build the HPCSH(-3) release:
$ ./build.sh -u -m hpcsh -U release
NOTE: the release build failed for me multiple times because I had not cleared out my /usr/src/../obj/ and /usr/src/../tools/ directories and rebuilt my tools for x86 after moving to -current.
- Attach the CF disk to the NetBSD machine. Partition it into two partitions (I used a 2 GB card, split into 24 MB and 1.9 GB).
- You can get away with using as little as a few MB, but I figured better safe than sorry with the extra space the 2GB card allots me.
- Note: Delete all partitions using fdisk before creating/editing these ones!
fdisk /dev/sd1
Do you want to change our idea of what BIOS thinks? [n] [enter]
Which partition do you want to change?: [none] 0
sysid: 1
start: 0
size: 24M
bootmenu [enter]
The bootselect code is not installed, do you want to install it now? [n] [enter]
Which partition do you want to change?: [none] 1
sysid: 169
start: (offset of partition 0's sectors)
size: (last sectors)
bootmenu [enter]
The bootselect code is not installed, do you want to install it now? [n] [enter]
Which partition do you want to change?: [none] [enter]
Update the bootcode from /usr/mdec/mbr? [n] [enter]
Should we write new partition table? [n] y
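The "(offset of partition 0's sectors)" answer above is just start plus size of partition 0. A sketch of the arithmetic, assuming the 24 MB first partition and 512-byte sectors:

```shell
# Partition 1 starts in the sector right after partition 0 ends.
start0=0
size0=$((24 * 2048))        # 24 MB expressed in 512-byte sectors
echo $((start0 + size0))    # prints 49152: the "start" answer for partition 1
```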
Now create filesystems on the two partitions:
newfs_msdos sd1e && newfs sd1a (your lettering here may differ)
Mount your filesystems so we can use them:
mount -o softdep /dev/sd1a /mnt && mount -o -l /dev/sd1e /mnt2
Copy your kernel and HPCBoot.exe to the msdos partition:
cd /usr/src/obj/releasedir/hpcsh/binary/kernel
cp netbsd-GENERIC.gz /mnt2/netbsd.gz
cd ../sets
mv kern-GENERIC.tgz kern-GENERIC.tar.gz
mv kern-HPW650PA.tgz kern-HPW650PA.tar.gz
for tgz in *.tgz; do tar -xpvzf $tgz -C /mnt; done
This got me booting; however, I hadn't set a root password anywhere! So make sure, the first time, to boot hpcboot.exe with the "single-user" checkbox checked, then mount / read-write and change the root password:
mount -u /
mount
/dev/wd0b on / type ffs (noatime, nodevmtime, local)
References
Original content from http://dayid.org/os/netbsd/doc/NetBSD-on-HPJornada620LX.html
This how-to describes how to install NetBSD/hpcarm on a Jornada 720. We will be using a 1 GB compact flash card and will configure swap on the card.
Contents
Disklabel the disk
We are going to create our partitions in such a way that disklabel will do all the work of calculating the sizes for us. Hook your flash card up to the computer and check where it attaches. On my machine it shows up as sd1, so that's what I will use.
Open up a terminal and lets begin. Use the -I to create first partition on disk.
# disklabel -i -I sd1
partition>
First we will create a DOS partition to store the kernel and the boot program. This partition is going to be at the beginning of the disk so it MUST have an offset of 63. That little gap is where the disklabel is stored; if you don't leave the offset, your disklabel will be overwritten when you try to format the disk.
partition> e
Filesystem type [?] [MSDOS]:
Start offset ('x' to start after partition 'x') [0.0307617c, 63s, 0.0307617M]:
Partition size ('$' for all remaining) : 6MB
partition>
We give it 6 MB as that is enough room for the boot program and 2 kernels.
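For reference, 6 MB at 512 bytes per sector works out to 12288 sectors, which is the figure disklabel will show for the e partition:

```shell
# Convert the 6 MB DOS partition to sectors and back.
echo $((6 * 1024 * 1024 / 512))       # prints 12288 (sectors)
echo $((12288 * 512 / 1024 / 1024))   # prints 6 (MB)
```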
Next we set up the swap partition, which is partition b by convention.
partition> b
Filesystem type [?] : swap
Start offset ('x' to start after partition 'x') : e
Partition size ('$' for all remaining) : 64MB
b: 131072 12351 swap # (Cyl. 6*- 70*)
partition>
We have told disklabel to start the swap partition after the DOS partition so we don't have to do any calculations.
Next is the main NetBSD partition a.
partition> a
Filesystem type [?] : 4.2BSD
Start offset ('x' to start after partition 'x') : b
Partition size ('$' for all remaining) : $
a: 1859473 143423 4.2BSD 1024 8192 45488 # (Cyl. 70*- 977*)
partition>
The last partition to set up is c. This partition represents the whole NetBSD portion of the disk, in our case that's partitions a and b.
partition> c
Filesystem type [?] : unused
Start offset ('x' to start after partition 'x') : e
Partition size ('$' for all remaining) : $
c: 1990545 12351 unused 0 0 # (Cyl. 6*- 977*)
partition>
You should now have:
partition> P
5 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 1859473 143423 4.2BSD 0 0 # (Cyl. 70*- 977*)
b: 131072 12351 swap # (Cyl. 6*- 70*)
c: 1990545 12351 unused 0 0 # (Cyl. 6*- 977*)
d: 2002896 0 unused 0 0 # (Cyl. 0 - 977*)
e: 12288 63 MSDOS # (Cyl. 0*- 6*)
partition>
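A quick cross-check of that label: partition c is supposed to cover exactly the NetBSD portion of the disk, so its size must equal a plus b:

```shell
a=1859473   # size of partition a in sectors, from the label above
b=131072    # size of partition b
echo $((a + b))   # prints 1990545, exactly the size listed for c
```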
If you're happy with the partition table then write it:
partition> W
Label disk [n]? y
Label written
partition>
Now you have labeled the disk, on to the next step.
Format the partitions
First, format the DOS partition:
# newfs_msdos sd1e
/dev/rsd1e: 12240 sectors in 1530 FAT12 clusters (4096 bytes/cluster)
MBR type: 1
bps=512 spc=8 res=1 nft=2 rde=512 sec=12288 mid=0xf8 spf=5 spt=32 hds=64 hid=63
Secondly, the NetBSD partition:
# newfs sd1a
/dev/rsd1a: 907.9MB (1859472 sectors) block size 8192, fragment size 1024
using 20 cylinder groups of 45.40MB, 5811 blks, 11264 inodes.
super-block backups (for fsck -b #) at:
32, 93008, 185984, 278960, 371936, 464912, 557888, 650864,
743840, 836816, 929792, 1022768, 1115744, 1208720, 1301696, 1394672,
1487648, 1580624, 1673600, 1766576, ...
Cross building hpcarm
Because of NetBSD's superb build framework, cross-compiling an hpcarm release is very easy.
First we need somewhere to store the hpcarm distribution and release. I use /usr/hpcarm/distribution and /usr/hpcarm/release, but anywhere will do.
Now tell build.sh where to store its build:
# export DESTDIR=/usr/hpcarm/distribution
# export RELEASEDIR=/usr/hpcarm/release
Start the build with:
# cd /usr/src
# ./build.sh -x -m hpcarm release
If you don't want X built drop the -x option.
Wait until it's finished and there should be a nice shiny new release in your release dir.
Now to install it on your flash card...
Install the release
Mount your flash card:
# mount -o softdep /dev/sd1a /mnt/jornada
Create a mount point for the DOS partition:
# mkdir /mnt/jornada/dos
Mount the DOS partition (IMPORTANT: use the -l option, see BUGS in mount_msdos(8)):
# mount -o -l /dev/sd1e /mnt/jornada/dos
Now we are ready to start installing.
Install the boot program:
# cp /usr/hpcarm/release/hpcarm/installation/hpcboot.exe /mnt/jornada/dos
Install the kernel:
# tar -x -p -z -f /usr/hpcarm/release/hpcarm/binary/sets/kern-JORNADA720.tgz -C /mnt/jornada/dos
Now extract the sets. This will extract all sets except the kernels.
# for f in /usr/hpcarm/release/hpcarm/binary/sets/[^k]*.tgz; do
> tar -x -p -z -f $f -C /mnt/jornada
> done
If you don't want the X sets, then just add x to the regular expression.
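Strictly speaking this is a shell glob rather than a regular expression; a pattern like [^kx]*.tgz skips both the kernel sets and the X sets. A quick demonstration on dummy files (the directory name and file names are made up for illustration):

```shell
# Demonstrate the pattern on fake set names ([!kx] is the portable
# spelling; the [^kx] form used in this guide works in most shells too).
mkdir -p /tmp/sets-demo && cd /tmp/sets-demo
touch base.tgz etc.tgz kern-JORNADA720.tgz xbase.tgz xfont.tgz
echo [!kx]*.tgz   # prints: base.tgz etc.tgz
```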
Configure the system
We are on the home straight; we just need to configure a few files and we are done.
Create the devices:
# cd /mnt/jornada/dev
# ./MAKEDEV all
Create a symlink to the kernel:
# cd /mnt/jornada
# ln -s dos/netbsd
Set up your fstab file:
# vi /mnt/jornada/etc/fstab
# cat /mnt/jornada/etc/fstab
/dev/wd0a / ffs rw,noatime,nodevmtime 1 1
/dev/wd0b none swap sw 0 0
/dev/wd0e /dos msdos -l,rw 0 0
Last but not least is to configure rc.conf.
Mine looks like:
# cat /mnt/jornada/etc/rc.conf
#
# see rc.conf(5) for more information.
#
# Use program=YES to enable program, NO to disable it. program_flags are
# passed to the program on the command line.
#
# Load the defaults in from /etc/defaults/rc.conf (if it's readable).
# These can be overridden below.
#
if [ -r /etc/defaults/rc.conf ]; then
. /etc/defaults/rc.conf
fi
# If this is not set to YES, the system will drop into single-user mode.
#
rc_configured=YES
wscons=YES
defaultroute="1.1.1.1"
hostname=someone.somewhere.co.uk
sshd=YES
rpcbind=YES
nfs_client=YES
lockd=YES
statd=YES
cron=NO
# Add local overrides below
#
If you are using DHCP, add dhclient=YES instead of the defaultroute and hostname.
Now you're ready to boot it up.
Boot it up
Unmount your partitions and put the card in the Jornada. Go to My Documents then up one level and into Storage Card. Run the hpcboot program. On the boot tab select HP Jornada 720 (Japanese), select \Storage Card\ as the path and netbsd as the kernel to boot. Set the root file system to wd. Press boot and enjoy.
After you've set up hpcboot and verified the settings by successfully booting NetBSD, you may skip all the WinCE setup while booting by hitting Win+E and using only return, space and arrow keys to navigate to hpcboot and start booting NetBSD.
Contents
Things you DO NOT want to do
- Do not try using i386 NetBSD before 4.0. It'll install, but the minute you put any kind of load on the machine (like compiling stuff from pkgsrc) the machine will die with a kernel panic. Most of what has been tried in this howto was done using the amd64 release 4.0.
- Do not try installing the amd64 port for the 3.1 release. The drivers are too old for this machine and you'll be lucky to get through the installation before it crashes.
- Do not install the NetBSD bootselect code when you are doing the installation. Answer "NO" to that question during installation. Otherwise you could wipe out the rEFIt loader along with any way to boot the machine. You'll either end up reloading OS X from scratch or trying to recover with a lot of black magic on the EFI shell.
- Do not try using LFS on a partition that has even a moderate amount of I/O going on it. Your machine will panic until you remove the LFS venom from its veins. Stick with UFS1 + softdep at least until 4.0 comes out. (UFS2 + log should work on 5.0 and -current).
- Do not just compile the meta package for xorg without first setting your options in such a way that you'll get the i810 server installed properly, or you'll waste a bunch of time recompiling later. UPDATE: this may be fixed in pkgsrc-2008Q4, so maybe you can just set "X11_TYPE=modular" in /etc/mk.conf and compile all the meta xorg packages from pkgsrc/meta-pkgs/xorg-*. For recent versions of -current, try setting X11FLAVOUR=Xorg and X11_TYPE=modular in /etc/mk.conf to get both native and package (modular) Xorg.
- Do not try to use XFree86 -- it will probably fail.
What works and what doesn't
Known to work
- SATA disk drivers
- Core2Duo processor (it still crashes, but the amd64 port much less often than the i386 port)
- i810 Graphics (Intel 945GM chip) and the LCD panel (with some help from the "915resolution" application)
- NVIDIA GeForce graphics (with recent Xorg).
- Keyboard
- Touchpad (but no synaptic extensions, so it sucks)
- IEEE1394 interface (shows up, but I've never used it)
- USB 2.0/1.0
- CDRW/DVDRW
- Marvell Gigabit Ethernet
- Automatic (ACPI?) fan control (i.e., you won't fry your MacBook installing NetBSD)
Known not to work
- Apple iSight camera (should work in 5.0 and -current post Nov 1, 2008). For now it's simply "not configured"
- Internal Atheros 802.11g interface. Might work in 5.0
- Internal Broadcom BCM43xx 802.11b/g interface (found in some models). Should work in the near future; it is supported by DragonflyBSD and OpenBSD.
- The internal accelerometer.
- The infrared remote control (actually it might work via "xev" and a window manager like fluxbox)
- The external DVI (might work with newer Xorg, but broken at the moment)
- Sound (audio0 shows up but you can't really use it due to typical NetBSD azalia problems). UPDATE: sound kind of works in 4.0 and probably fully in 5.0/-current; however, you'll need to jack with all the values you see from "mixerctl -a" before you'll hear anything. Start by unmuting everything and maxing out all the values, then turn it down from there.
Gathering What you'll need
Before you get started, let's get straight what you have and what you'll need.
- A MacBook with a Core2Duo CPU (use i386 for the older CoreDuo MacBooks) (non-Pro; I don't know if any of this works with the Pro model). I am totally unsure about the MacBook Pro; there wasn't one available to test as this document was assembled. UPDATE: the MacBook Pro should work fine, but some of the hardware, for example graphics cards and internal 802.11 interfaces, might be completely different.
- Mac Minis also seem to work with largely this advice.
- A -current version of NetBSD for the "amd64" architecture (i386 versions will crash under a small bit of load on the MacBook). I used 200705170002Z from ftp://nyftp.netbsd.org/pub/NetBSD-daily/HEAD/. I've noticed that netbsd-4 beta snapshots seem to work fine as well (as long as they are the amd64 architecture).
- A CDRW or CDR with the "boot.iso" file burned on it from the NetBSD distribution mentioned above. I used an FTP based install, so there was no need to burn the whole distribution onto my CDRW. It helps to have a local FTP, NFS, or HTTP server with the binary sets on them if you want a fast install process.
- A working, ethernet-based connection to the Internet (or localnet with cached install sets) or a known-good USB-based wifi adapter.
- A copy of rEFIt to use as a primary bootloader. Get it at http://refit.sourceforge.net If you already have MacOS X installed, I'd suggest getting the disk image (dmg) version. It has a GUI-based installer that works with MacOS.
Boot Loaders
A (U)EFI boot loader is required to transfer control to a target operating system at boot time. As of this writing, there are two boot loaders in use for NetBSD and Linux setups on Macintosh computers: rEFIt, and a more recent fork of rEFIt named rEFInd.
Using rEFIt
First grab a copy of rEFIt from http://refit.sourceforge.net and install it. The DMG-based installer is one good option, but you can also create a bootable CDROM that'll assist in installing it if for some reason you don't have OS X anymore. If you choose the CDROM installation method, you'll need to download the ISO image instead of the DMG from the rEFIt site. The rEFIt program is basically an EFI-based GRUB or LILO: a slick-looking bootloader that lets you choose between your various Mac partitions and the OSes that live on them. It's still necessary to install a secondary bootloader for each individual OS. Because of limitations of the ancient MSDOS partition table, you'll only be able to have 3 OSes on your machine (the EFI firmware takes up a partition, too).
While in OSX, open a terminal and do:
cd /efi/refit/ ; ./enable.sh
You may also wish to edit /efi/refit/refit.conf to uncomment the Legacy section in case you want NetBSD to boot automatically. You might also choose to change the rEFIt timeout to something less than 20 seconds.
Afterwards rEFIt comes up before the Apple boot loader and you get a nice set of colorful icons representing your partitions. Before I made space for NetBSD, I could see my OS X and Windows partitions and use rEFIt to boot either one. rEFIt must do some kind of BIOS emulation, because any kind of bootable CDROM in the drive as the system is powered on also shows up in the rEFIt menu. You can boot any PC operating system, but very, very few are going to get very far on this newfangled hardware. Fortunately, NetBSD -current amd64 boots just fine.
Using rEFInd
TBD
Creating a new partition for NetBSD
There are two ways to do this: the GUI way and the CLI way. The GUI way is probably easier for beginners, but I'm guessing that if you want to install NetBSD -current on your MacBook you aren't a beginner. If you want to use the GUI, then you'll need to install Boot Camp. For Mac OS X (10.4) you can download a beta version of Boot Camp for free from Apple's site. It's usually found at http://www.apple.com/macosx/bootcamp/ but it's in beta, so it could go away any time. If you haven't already run the "Boot Camp Assistant" you can use this tool to resize your OS X partition and split it up ala PartitionMagic. If you already have a Windows partition you've created with Boot Camp, you won't be able to use Boot Camp Assistant again. You'll have to do it from a terminal.
Here is how:
First, get a list of your current partitions:

diskutil list
Now decide which partition you want to be your NetBSD partition, and do not pick the EFI partition (you need that). I chose to use partition 4 on my MacBook since 1==EFI, 2==OSX, and 3==WinXP.
(assuming you have an 80 GB disk and want to split it roughly as follows)
sudo diskutil resizeVolume disk0s2 32G "Linux" <name of NetBSD volume> 21G "MS-DOS FAT32" <name of windows volume> 21G
This operation only changes your partition layout to include a 32 GB OS X partition, a 21 GB Linux one, and a 21 GB Windows partition. We'll change the Linux tag to NetBSD once we get into the NetBSD installer. The diskutil command above was shamelessly lifted from the Gentoo Linux wiki on installing their distro on the MacBook, but it works.
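A quick sanity check on the sizes: the three requested volumes must fit on the disk, with the remaining space absorbing the EFI partition and partition-map overhead on the 80 GB example disk.

```shell
# Sizes requested in the diskutil command above, in GB:
echo $((32 + 21 + 21))   # prints 74, leaving the rest of the 80 GB disk
```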
Installing NetBSD -current
As mentioned above, you'll need a -current release of NetBSD for the AMD64 architecture (Intel licensed the AMD64 instructions and re-named them to hide their shame at having the Itanic rejected by Microsoft). If you use the i386 port, you can expect to have major problems; so don't say you weren't warned. I guess if you have a CoreDuo MacBook (not the Core2Duo) you might want to give i386 a shot, but that's not what this document intends to cover. Once you have the "boot.iso" from "amd64/installation/cdrom" properly burned onto a CDR, put it into your system's drive and restart. rEFIt will detect the bootable disk and there will be an icon with a little CD picture on it showing you the disk as a boot option. Go ahead and select it, then let it boot up. Do the installation as usual, but remember: do not install the MBR bootselect code! NetBSD will automatically install its stage-2 loader on the partition you select, and rEFIt will transfer control to that partition when you select it from the menu (it'll show up automatically, as rEFIt probes your partitions prior to showing the initial menu). Once you reboot there is more fun on the way. If you use the boot.iso file to create your CD-ROM and didn't put any of the tarball "sets" on the CD-ROM, you'll have to get them over the network via http or ftp. One option is to go to another, working, machine and write down the full path on the FTP site to the directory right before the 'amd64' directory. For example: ftp://nyftp.netbsd.org/pub/NetBSD-daily/HEAD/200705170002Z . Your mileage may vary. Refer to the regular NetBSD handbook if you need help with the installation. There is nothing too special about it other than a little extra hassle if you use the network.
Installing on dk(4) wedges
If the install kernel (or the installed kernel, with different symptoms) was compiled with dk(4) support (and gpt-autodetection) -- that is if the kernel configuration included
options DKWEDGE_AUTODISCOVER
options DKWEDGE_METHOD_GPT
as the recent install kernels do, and you plan to have both OS X and NetBSD on the same disk, you have to do the installation by hand. Since you need GPT partitions for rEFIt and OS X, dk(4) wedges will be added for them. Since the disk can only be accessed via wedges once at least one wedge has been added, and sysinst(8) does not know about wedges, the installation will fail with a `device busy' when sysinst tries to newfs(8) the NetBSD partition(s).
Fortunately, it is not hard to do the installation by hand. The following example assumes that you are installing from a cd, that you want to have just OS X and NetBSD on the disk (a NetBSD-only installation is easy; other installation media or a third operating system work analogously), that you are installing amd64, and that OS X is already installed on an HFS+ partition. The example uses a 200GB disk, with roughly half for the EFI and HFS+ partitions for rEFIt and OS X, and roughly half for the NetBSD partitions. Please make sure that you understand the starting sectors and sizes in the examples below before you try to mimic them.
- Install rEFIt as described above.
- Decrease the size of the HFS+ partition using the graphical Disk Utility or the command-line diskutil(8) from OS X (the graphical interface is found from Applications -> Utilities -> Disk Utility in recent versions of OS X). It is easiest to leave empty space for NetBSD and not create a partition at this stage.
- Boot the installation cd. Exit sysinst (or choose `run /bin/sh').
Create the drvctl(4) device and enough [r]dk(4) devices:
cd /dev && sh MAKEDEV drvctl dk7
Mount a memory-mapped file system, untar at least the base set on it, and add the relevant directories to your paths:
mount_mfs -s512m swap /mnt
mount -r -t cd9660 /dev/cd0a /mnt2
cd /mnt && tar xzpf /mnt2/amd64/binary/sets/base.tgz
cd /dev ; tar cf - . | (cd /mnt/dev ; tar xvpf - )
chroot /mnt
Use gpt(8) (/sbin/gpt) to edit the GPT partition table. Assuming your disk is wd0,
gpt show wd0
should show something like
         start        size  index  contents
             0           1         MBR
             1           1         Pri GPT header
             2          32         Pri GPT table
            40      409600      1  GPT part - EFI System
        409640   195360984      2  GPT part - Apple HFS
           ...
     390721935          32         Sec GPT table
     390721967           1         Sec GPT header
Add an FFS partition and a 4GB swap partition by:
gpt add -b 195770624 -i 3 -s 186562702 -t ffs wd0
gpt add -b 382333327 -i 4 -s 8388608 -t swap wd0
gpt label -i 3 -l "NetBSD-root" wd0
gpt label -i 4 -l "NetBSD-swap" wd0
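The -b offsets above are plain sector arithmetic (512-byte sectors): the FFS partition starts right after the HFS+ partition, and a 4 GB swap is 8388608 sectors. A quick check:

```shell
# HFS+ starts at sector 409640 and is 195360984 sectors long, so the
# FFS partition begins at their sum; 4 GB in 512-byte sectors = 8388608.
echo $((409640 + 195360984))             # prints 195770624
echo $((4 * 1024 * 1024 * 1024 / 512))   # prints 8388608
```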
Dynamically add the relevant wedges using dkctl(8). The output of gpt add should show you the needed parameters (you can use ffs and swap, respectively, for the types).
dkctl wd0 delwedge dk2
dkctl wd0 addwedge NetBSD-root 195770624 186562702 -t ufs
dkctl wd0 addwedge NetBSD-swap 382333327 8388608 -t swap
Alternatively, you can reboot the installation cd, since the GPT partitions will be detected automatically.
Edit the MBR table using fdisk(8) (fdisk -u wd0). Once you are done, the MBR table should look something like
0: GPT Protective MBR (sysid 238)
    ...
1: Apple HFS (sysid 175)
    ...
2: NetBSD (sysid 169)
    ...
3: NetBSD (sysid 169)
    ...
with the same starting sectors and sizes as in the GPT table.
Edit the disklabel (disklabel -i wd0). Once you are done, it should look something like this (as long as the start, size and type are correct, the rest is quite arbitrary):
a: 186562703 195770624 4.2BSD 0 0 0
b: 8388608 382333327 swap
c: 186562703 195770624 unused 0 0
d: 390721968 0 unused 0 0
e: 409639 1 unknown
f: 195360984 409640 HFS
(the rEFIt EFI partition is left unknown, since we do not want to risk messing it up).
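The disklabel numbers are consistent with the GPT layout above; in particular, the root partition (a) ends exactly where swap (b) begins, and swap ends at the start of the secondary GPT table:

```shell
# Numbers taken from the example disklabel and gpt show output.
echo $((195770624 + 186562703))   # prints 382333327, the swap (b) offset
echo $((382333327 + 8388608))     # prints 390721935, the Sec GPT table start
```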
newfs(8) the FFS filesystem (make sure you use the correct /dev/rdk) and mount it (the log option is, of course, optional):
umount /mnt
newfs -O2 /dev/rdk2
mount -o log /dev/dk2 /mnt
untar(1) the sets (SETS is the list of sets you want to install -- at least base.tgz and kern-GENERIC.tgz):
cd /mnt2/amd64/binary/sets
for F in $SETS; do (cd /mnt && tar xzpf - ) < $F; done
Edit /mnt/etc/fstab (to use vi, add /mnt/[usr/]bin and relevant libraries to your [ld] path, and export TERM=vt100). Make sure that you at least have
/dev/dk2  /         ffs     rw         1 1
/dev/dk3  none      swap    sw         0 0
ptyfs     /dev/pts  ptyfs   rw         0 0
kernfs    /kern     kernfs  rw,noauto  0 0
procfs    /proc     procfs  rw,noauto  0 0
Install the bootcode:
cp -p /mnt/usr/mdec/boot /mnt/usr/mdec/boot.cfg /mnt
installboot -v /dev/rdk2 /mnt/usr/mdec/bootxx_ffsv2
Edit /mnt/boot.cfg if needed
Create the devices:
cd /mnt/dev && sh MAKEDEV all dk7
Reboot, and proceed with configuration.
Getting X to work
Note: The following description applies to older versions of -current (pre November 2008, pre 5.0). For recent versions, try setting X11FLAVOUR=Xorg in /etc/mk.conf when building a release, and try setting X11_TYPE=modular in /etc/mk.conf for building packages. The startup scripts (notably /etc/rc.d/xdm) look for X in /usr/X11R6, while Xorg resides under /usr/X11R7. To get xdm(1) working, add
command="/usr/X11R7/bin/xdm"
to /etc/rc.conf.d/xdm.
Getting X working on your MacBook is something of a non-trivial task. First of all, the default XFree86 code that comes with the -current distribution won't even recognize the PCI ID of the video card (an Intel GMA950). You'll have to install the Xorg server. I did this by setting X11_TYPE=xorg in /etc/mk.conf and installing it from /usr/pkgsrc/meta-pkgs/xorg (you did install pkgsrc, right?). One small problem is that the i810 driver does not build by default on the x86_64 (amd64) architecture. You need to set PKG_DEFAULT_OPTIONS to include the xorg-server-i810 string. This can be done by simply typing export PKG_DEFAULT_OPTIONS=xorg-server-i810 before you do the make install command from the /usr/pkgsrc/meta-pkgs/xorg directory. Once you have Xorg installed (which will take a while to compile), you can go ahead and set it up. One method is to do an X -configure, then copy the /root/xorg.conf.new file into /etc/X11 and edit it to your taste. You'll need to manually set the HorizSync and VertRefresh in the display section. See the example xorg.conf for reference. Just for a review, let's enumerate the steps needed here:
For 5.0 (and -current 5.99.1 and later), you can instead set "X11FLAVOUR=Xorg" and build a release/distribution to get a working X server.
- Download the pkgsrc-2007Q1 or newer tarball
Unpack it into /usr, e.g.:
cd /usr ; tar xzvf /tmp/pkgsrc-2007Q1.tar.gz
Set your X11 server type to be Xorg, e.g., as root:
echo "X11_TYPE=xorg" >> /etc/mk.conf
Set your server build options so you get the i810 driver even though this is an x86_64 machine. E.g.:
export PKG_DEFAULT_OPTIONS=xorg-server-i810
Allow the xorg-server-i810 to be built by editing /usr/pkgsrc/x11/xorg-server/options.mk and adding xorg-server-i810 to the end of the _COMMONCARDDRIVERS list and removing it from the _NOTX86_64CARDDRIVERS list.
Build xorg, e.g.:
cd /usr/pkgsrc/meta-pkgs/xorg ; make install
Move the old XFree86 tree out of the way and link xorg in its place, e.g.:
cd /usr/ ; mv X11R6 xfree86.X11R6 ; ln -s /usr/pkg/xorg /usr/X11R6
Create a skeleton xorg.conf file, e.g. as root:
X -configure
Copy the skeleton file into place and edit it, e.g.:
cp /root/xorg.conf.new /etc/X11/xorg.conf
vi /etc/X11/xorg.conf
During your editing / customization make sure to add the following lines:
- Add HorizSync 28-64 and VertRefresh 43-60 to the Monitor section. These are keywords, not Options; so add them just as shown.
- Change the mouse ZAxisMapping option to "4 5" instead of "4 5 6 7" or any USB mice you plug in will behave badly.
- Add the DefaultDepth 24 line to the Screen section just below the line that says Monitor "Monitor0"
- In the Screen you'll find the subsection for the 24-bit display. Just below the line that says Depth 24 add a line that says Modes "1280x800"
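Taken together, the edits above might look like the following xorg.conf fragment (the section identifiers are the ones X -configure typically generates; yours may differ):

```
Section "Monitor"
    Identifier   "Monitor0"
    HorizSync    28-64
    VertRefresh  43-60
EndSection

Section "Screen"
    Identifier    "Screen0"
    Device        "Card0"
    Monitor       "Monitor0"
    DefaultDepth  24
    SubSection "Display"
        Depth  24
        Modes  "1280x800"
    EndSubSection
EndSection
```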
Install the 915resolution tool from pkgsrc (i.e. cd /usr/pkgsrc/sysutils/915resolution ; make install)
Replace a mode you know you'll never use with the 32-bit mode for 1280x800. If you don't do this, you won't be able to use the native (1280x800) video mode.
- List all the available modes: 915resolution -l
- Pick a mode and replace it: 915resolution 4d 1280 800 32
Test the X server: startx
Now add the 915resolution command to your startup scripts (e.g. /etc/rc.local) so the mode is restored on every boot.
See also
Changelog
- 2015-11-01: General formatting. Fixed code blocks so they are more readable. Expanded boot loader section (rEFInd section still needs to be written; it is a placeholder for now).
Contents
Requirements
You will need an Apple Power Macintosh G4, the NetBSD macppc installation CD-ROM and a working internet connection.

The installation CD-ROM
You can download the NetBSD/macppc installation CD-ROM from the NetBSD mirrors, for example from: ftp://ftp3.de.netbsd.org/pub/NetBSD/iso/3.0.1/macppccd-3.0.1.iso
Booting
Hold down the keys Apple+Alt+O+F after the Apple boot tone sounds to enter OpenFirmware, and enter:
boot cd:,ofwboot.xcf netbsd.macppc
Sysinst
Now, wait until sysinst shows up, but don't start the normal installation. We need to create an Apple HFS partition first that is readable by OpenFirmware; the current NetBSD disklabel isn't able to create such partitions yet.
Configure the Network
Now configure your integrated gem0 (100baseTX) or wi0 (WLAN) NIC to match your local network configuration so you have an internet connection, then exit the sysinst program.
Partitioning the hard disk
Now we're going to partition the disk. Enter:
# pdisk /dev/rwd0c
to start the pdisk program. Create a new partition map using i.
Create a partition with offset at 2p with the size 32m and type Apple_HFS using the C option. Name it boot.
Create a partition with offset at 3p with size Xg and bzb bit a (root) using the c option. Name it root. If you want to have the whole system on one partition (root), assign the rest of the disk space here.
Create a partition with offset at 4p with size Xg and bzb bit b (swap) using the c option. Name it swap.
If you want to create further partitions like /tmp, /var and /home, do the following:
Create a partition with offset at 5p with size 1g and bzb bit e (tmp) using the c option. Name it tmp.
Create a partition with offset at 6p with size 1g and bzb bit f (var) using the c option. Name it var.
Create a partition with offset at 7p with size 5g and bzb bit g (usr) using the c option. Name it usr.
Dump the partition configuration using p.
Once you have configured the new partition map, it should look like this:
1: Apple partition map
2: 32m HFS boot
3: 1g Apple_UNIX_SVR2 root / a
4: 1g Apple_UNIX_SVR2 swap b
5: 1g Apple_UNIX_SVR2 tmp e
6: 1g Apple_UNIX_SVR2 var f
7: 5g Apple_UNIX_SVR2 usr /usr g
8: *g Apple_UNIX_SVR2 home h
Write the partition map using w.
Newfs
Now newfs your partitions:
# newfs /dev/rwd0x
Replacing x with the partition name.
Note: Don't worry about error messages saying that newfs couldn't touch the disklabel; that's expected, since we created everything with pdisk.
Mounting root
Mount the root partition to /mnt2 and create following directories:
# mount /dev/wd0a /mnt2
# cd /mnt2
# mkdir etc tmp usr var home
Create fstab
Create an fstab file that matches your wd0 disklabel configuration.
# echo /dev/wd0a / ffs rw 1 1 >> /mnt2/etc/fstab
# echo /dev/wd0b none swap sw 0 0 >> /mnt2/etc/fstab
# echo /dev/wd0e /tmp ffs rw 1 2 >> /mnt2/etc/fstab
# echo /dev/wd0f /var ffs rw 1 2 >> /mnt2/etc/fstab
# echo /dev/wd0g /usr ffs rw 1 2 >> /mnt2/etc/fstab
# echo /dev/wd0h /home ffs rw 1 2 >> /mnt2/etc/fstab
# echo ptyfs /dev/pts ptyfs rw 0 0 >> /mnt2/etc/fstab
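The resulting /mnt2/etc/fstab, consolidated for reference (the device letters follow the bzb bits assigned in pdisk: a=root, b=swap, e=tmp, f=var, g=usr, h=home):

```
/dev/wd0a /        ffs   rw 1 1
/dev/wd0b none     swap  sw 0 0
/dev/wd0e /tmp     ffs   rw 1 2
/dev/wd0f /var     ffs   rw 1 2
/dev/wd0g /usr     ffs   rw 1 2
/dev/wd0h /home    ffs   rw 1 2
ptyfs     /dev/pts ptyfs rw 0 0
```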
Umount
Unmount /mnt2 again and enter sysinst.
# cd /
# umount /mnt2
# sysinst
Using Sysinst
In sysinst, just use Re-install sets and proceed with the installation. After finishing these steps (equivalent to a standard installation; this will create all device nodes in /dev) you can configure your timezone and set a root password by going to the Utilities menu again.
First boot
Now we're ready for the first boot. Exit sysinst and enter:
# reboot
Leave the installation CD in your drive and enter:
boot cd:,ofwboot.xcf hd:3,/netbsd
Wait until NetBSD boots into single user mode and choose /bin/sh.
Set your terminal to VT100
# export TERM=vt100
Remount root read-write using:
# mount -uw /
Cd to /etc and edit your rc.conf:
cd /etc
vi rc.conf
Please set:
rc_configured=YES
wscons=YES
hostname=my.powermac.g4
dhcpcd=YES
If you don't have a DHCP server in your network, use /etc/ifconfig.gem0 instead and remove the dhcpcd line from rc.conf.
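Each line of /etc/ifconfig.gem0 is simply a set of arguments passed to ifconfig at boot; a minimal sketch, using the same example address as the manual ifconfig command in this section (adjust to your network):

```
inet 192.168.0.124 netmask 255.255.255.0
```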
Right now, set up your network. Either use dhcpcd to configure your network via DHCP
# dhcpcd gem0
or manually specify your network using ifconfig.
# ifconfig gem0 192.168.0.124 netmask 255.255.255.0
Fetch pkgsrc.tar.gz
ftp -a ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-current/tar_files/pkgsrc.tar.gz
and extract it to /usr
tar xvfz pkgsrc.tar.gz -C /usr
Install hfsutils from sysutils
# cd /usr/pkgsrc/sysutils/hfsutils
# make install clean
Format your boot partition through:
# hformat /dev/wd0d
Copy the ofwboot.xcf file from your installation CD-ROM (mount /dev/cd0a /mnt) to that partition:
# hcopy /mnt/ofwboot.xcf :
Reboot into OpenFirmware and enter the following commands:
eject cd
You don't need the installation disk anymore.
reset-nvram
setenv auto-boot? false
setenv boot-device hd:2,ofwboot.xcf
setenv boot-file hd:3,/netbsd
reset-all
Next time you reboot your mac you get kicked into OpenFirmware directly. Just type:
boot
and you will boot into NetBSD
XF86Config
As for X11 here is my XF86Config file:
Section "Files"
FontPath "/usr/X11R6/lib/X11/fonts/local/"
FontPath "/usr/X11R6/lib/X11/fonts/misc/"
FontPath "/usr/X11R6/lib/X11/fonts/75dpi/:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/100dpi/:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/Type1/"
FontPath "/usr/X11R6/lib/X11/fonts/CID/"
FontPath "/usr/X11R6/lib/X11/fonts/Speedo/"
FontPath "/usr/X11R6/lib/X11/fonts/75dpi/"
FontPath "/usr/X11R6/lib/X11/fonts/100dpi/"
EndSection
Section "ServerFlags"
Option "blank time" "10" # 10 minutes
Option "standby time" "20"
Option "suspend time" "30"
Option "off time" "60"
Option "PCI type" "UniNorth"
EndSection
Section "InputDevice"
Identifier "Keyboard1"
Driver "keyboard"
Option "Protocol" "wskbd"
Option "Device" "/dev/wskbd1"
Option "AutoRepeat" "500 5"
Option "XkbRules" "xfree86"
Option "XkbModel" "macusb"
Option "XkbLayout" "us"
EndSection
Section "InputDevice"
Identifier "Mouse1"
Driver "mouse"
Option "Protocol" "wsmouse"
Option "Device" "/dev/wsmouse0"
Option "ZAxisMapping" "4 5"
EndSection
Section "Monitor"
Identifier "Generic Monitor"
HorizSync 27-100 # multisync
VertRefresh 50-75 # multisync
Option "dpms"
EndSection
Section "Device"
Identifier "Rage128 Pro"
Driver "ati"
BusID "PCI:0:16:0"
EndSection
Section "Screen"
Identifier "Screen1"
Device "Rage128 Pro"
Monitor "Generic Monitor"
DefaultDepth 16
SubSection "Display"
Depth 8
Modes "1280x1024"
EndSubSection
SubSection "Display"
Depth 16
Modes "1024x768"
EndSubSection
EndSection
Section "ServerLayout"
Identifier "Main Layout"
Screen "Screen1"
InputDevice "Mouse1" "CorePointer"
InputDevice "Keyboard1" "CoreKeyboard"
EndSection
That's it, you now have a working NetBSD system on your G4.
Contents
Intro
The MobilePro 790 provides an excellent platform to run NetBSD on. It's very similar to the 780 but with the addition of an internal flash rom. Aside from the fact that all these HPC devices are really starting to show their age, it makes a great system. The 790 is particularly nice because the internal flash rom means the bootloader and kernel image can be stored on the device itself, and there's no mucking around with partitioning your compact flash card.
What you need
- MobilePro 790, obviously. Preferably with a good battery. There's a built-in suspend in NetBSD when the battery state is critical, and even when plugged in mine enjoys turning itself off.
- CF card. At least 128 MB for a barebones console install, 512 MB or more for an install with X.
- Some method of getting the package sets onto the device. Using FTP or NFS with a wired or wireless pcmcia network card is definitely the most efficient method. It is also possible to use another CF card with a pcmcia adapter, mount this, and read the packages off of it. Finally, I suppose it would be possible to use a pcmcia CD or hard drive adapter of some sort, although I'm not familiar with this road at all.
- psdboot.exe and the netbsd kernel file from the 3.0 installation directory. Ungzip the kernel before you use it. Also grab the netbsd-GENERIC file and ungzip it for later use.
Installing
Format your CF card in the OS of your choice. If you're in Windows, make sure to format it as FAT16 so CE can read it. Copy psdboot.exe and netbsd (the kernel image) to the card. Pop it in the front CF slot on the MobilePro. Hold shift while you're starting up to bypass that godawful setup routine. Go into My Computer and copy all the files from your CF card (Storage Card2) to the Internal Flash Rom.
Start psdboot.exe from the Internal Flash Rom. Set the kernel location to "\Internal Flash Rom\netbsd". Set the device type to an NEC MobilePro 780. All the preset settings for the 780 seem to carry over fine. Click boot. If all goes well your screen should go to a console and you'll be booting into NetBSD. If you plan on copying your package sets off a second CF card, scroll down and read that section now!
Select install when the installer comes up. Do a custom install and select only the kernel, Base and /etc packages. If you have a large CF card you can select more, but I'd recommend just doing the bare minimum like this and grabbing them yourself later.
Since we don't need the Fat16 partition on the CF card anymore, select use the whole disk on the next step. On the next step (NetBSD partitioning), I usually just put everything in / because managing space gets to be a nightmare with a bunch of partitions on these small disks. Whether or not you have a swap file is up to you. Most command line apps and even lots of simple X apps will do fine with no swap. If you plan on compiling anything or running any big, modern X apps (read: browsers), you need one. 32mb will do. WARNING: If you do lots of work on your MobilePro that swaps out a lot, the frequent writes will wear down your CF card, as CF cards do have a limited number of writes. It's still debatable how much of an issue this is, but I just thought I'd warn you.
Getting Base Sets over FTP or NFS
Select either FTP or NFS at the Select Medium menu. In either case, network config will come up. Select your interface (wi0 in my case). NetBSD supports a reasonable amount of wired and wireless pcmcia network cards. My Wavelan Gold works great. I won't go into Network settings as this will differ for everyone. Once your network is configured, if you selected NFS, you just have to fill out the hostname and directory that is NFS shared on your remote machine containing the installation sets, and then select continue. Your sets will be copied over and installed.
Getting Base Sets from another CF card in a pcmcia adapter
Download whatever packages you want from the NetBSD FTP (ftp.netbsd.org/pub/NetBSD/NetBSD-3.0/hpcmips/binary/sets). Format your second CF card however you want and stick them on it.
Have the card you're installing to in the CF slot, and your card with the packages in the PCMCIA adapter. Boot into the installer as instructed above, but do not select install just yet. Select Utility Menu, and then Run /bin/sh. When you're at the shell make a directory and mount your second card in it. The device name will be wd1. Example:
cd /mnt
mkdir sets
mount /dev/wd1a /mnt/sets
You might want to check and make sure everything went fine:
cd /mnt/sets
ls
And you should see the sets you put on the card earlier. Now type exit. Exit back to the main menu. Start the install and do steps three and four as instructed. When you get to the Select Medium screen, select local directory. Enter in whatever directory you mounted your second CF card in. (/mnt/sets in the example). When you've set the directory proceed with the installation.
Post Install Config
Once the install system finishes getting and/or unpacking the sets you requested, you'll be asked to do some basic config tasks. When it asks you if you want to delete the sets from /usr/INSTALL, do it. They take up way too much space to leave sitting around. Setting the date and time and a root password are pretty much self-explanatory. Select reboot when you are done. You may have to press the reset button on the bottom of the unit to get back into Windows CE.
Booting Your New System
Be careful when you reboot the system. Windows CE will want you to format your CF card, be sure to click No. Open up psdboot.exe from the Internal Flash Rom. Change the kernel location to "\Internal Flash Rom\netbsd-GENERIC". Click boot. You should see the familiar boot process and eventually come to a login prompt. Have fun!
X Server
NetBSD hpcmips uses Xhpc, not XFree86. When I try to use the X server included with 3.0, it gives a "bad display name" error, and then freezes the system. Xhpc doesn't have a config file as far as I can tell. It does some voodoo magic and auto-configures each time, so I'm out of luck as far as configuring it. I tried installing the 2.1 X packages and it was the same story. So far the only X packages I've tried that have worked fine were the ones all the way back from 1.6.1. It starts without error, and the touchscreen works fine. I haven't run into any incompatibility issues with X apps compiled for newer NetBSD versions so far. The X server from 2.0.2 could possibly work too. I have a feeling that a custom compiled X server would run fine, but I don't have the appropriate build environment to create one, and compiling on the 790 obviously isn't a viable option.
Anything remotely complex (even, for example, the dillo web browser) takes ages to load and will probably go into swap space. Simple applications such as games, text editors, and image viewers work fine. I hate to say it, but don't expect to get any real serious work done in X.
Addendum: X11 over an internal network with XDMCP is an option. There are not very many 10/100 or 802.11g adapters that are compatible with this device (I don't know of any at all, personally), but on such a small screen XDMCP works remarkably well, especially with the Ratpoison window manager.
Misc
The suspend feature seems to be working perfectly. With version 3.1 after a full charge, a MobilePro running NetBSD in suspend mode for 3 hours, followed by an hour of use, followed by 6 hours of suspend mode had 80% battery life remaining. The same device, after undergoing the same test the next day using Windows CE, had 81% battery life remaining.
This describes how to install NetBSD (i386/amd64) using a USB memory stick instead of a CD-ROM drive.
Contents
With a downloaded image
From NetBSD 5.1.2 on, for the i386 and amd64 ports, it is possible to download a USB memory stick image for installation instead of downloading and transforming a CD image.
This section describes in detail how to use this method. If you want to create an image yourself, please see below.
Downloading the installation image
Installation images are available on the NetBSD mirrors under the images/ directory, their filenames match the *install.img.gz pattern.
Note that as of 9.2_STABLE, there are two amd64 images, *install.img.gz and *bios-install.img.gz. The latter is intended for older hardware which is unable to boot using a hybrid MBR and GPT image.
For example if we want to download NetBSD 9.2 for amd64:
# ftp ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-9.2/images/NetBSD-9.2-amd64-install.img.gz
Copying the installation image to the memory stick
To prepare the memory stick under a Unix system you can just use dd(1). Whenever using dd(1), remember to set the block size by specifying the bs parameter in order to speed up the write to the installation media a bit (e.g. 1m). For example, if the memory stick is recognized as sd0 (Warning: this will overwrite all the contents on your memory stick):
# gunzip NetBSD-9.2-amd64-install.img.gz
# dd if=NetBSD-9.2-amd64-install.img of=/dev/rsd0d bs=1m
In the previous command we have used rsd0d in order to refer to the whole sd0 disk.
On Linux the command is similar, although it needs some minor adjustments; if the memory stick is recognized as sdb (Warning: this will overwrite all the contents on your memory stick):
# gunzip NetBSD-9.2-amd64-install.img.gz
# dd if=NetBSD-9.2-amd64-install.img of=/dev/sdb bs=1M
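dd simply copies the image byte-for-byte, and its mechanics can be tried safely with plain files before touching a real device (the file names here are arbitrary):

```shell
# Create a 2 MB dummy "image", copy it with a large block size,
# and confirm the copy is byte-for-byte identical.
dd if=/dev/zero of=fake.img bs=1024 count=2048 2>/dev/null
dd if=fake.img of=fake.out bs=1M 2>/dev/null
cmp fake.img fake.out && echo "copy identical"
```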
On Windows you can use rawrite32 to copy the image to the stick.
Installation process
After NetBSD is booted from the memory stick the installation process is the usual one (you can find an example in The NetBSD Guide). Just pay attention when choosing the installation media: if you want to install using the installation sets on the memory stick, when choosing the installation media select g: local directory and then clear the base (by default it points to release/).
Build your own image
Use build.sh -U release install-image with your usual build settings from your src directory, then carry on with the instructions following the download above.
Manual method
Make the memory stick bootable
First, install the Master Boot Record (MBR):
# fdisk -i /dev/rsd0d
Then, create an fdisk partition for NetBSD:
# fdisk -u /dev/rsd0d
Disk: /dev/rsd0d
NetBSD disklabel disk geometry:
cylinders: 974, heads: 128, sectors/track: 8 (1024 sectors/cylinder)
total sectors: 997375
BIOS disk geometry:
cylinders: 974, heads: 128, sectors/track: 8 (1024 sectors/cylinder)
total sectors: 997375
Do you want to change our idea of what BIOS thinks? [n] n
Partition table:
0: Primary DOS with 32 bit FAT (sysid 11)
start 8, size 997367 (487 MB, Cyls 0-973/127/8)
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
Which partition do you want to change?: [none] 0
The data for partition 0 is:
Primary DOS with 32 bit FAT (sysid 11)
start 8, size 997367 (487 MB, Cyls 0-973/127/8)
sysid: [0..255 default: 11] 169
start: [0..974cyl default: 8, 0cyl, 0MB] (RETURN)
size: [0..974cyl default: 997367, 974cyl, 487MB]
bootmenu: [] (RETURN)
Partition table:
0: NetBSD (sysid 169)
start 8, size 997367 (487 MB, Cyls 0-973/127/8)
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
Which partition do you want to change?: [none] (RETURN)
We haven't written the MBR back to disk yet. This is your last chance.
Partition table:
0: NetBSD (sysid 169)
start 8, size 997367 (487 MB, Cyls 0-973/127/8)
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
Should we write new partition table? [n] y
After that, set the NetBSD partition active (it's partition number 0):
# fdisk -a /dev/rsd0d
Disk: /dev/rsd0d
NetBSD disklabel disk geometry:
cylinders: 974, heads: 128, sectors/track: 8 (1024 sectors/cylinder)
total sectors: 997375
BIOS disk geometry:
cylinders: 974, heads: 128, sectors/track: 8 (1024 sectors/cylinder)
total sectors: 997375
Partition table:
0: NetBSD (sysid 169)
start 8, size 997367 (487 MB, Cyls 0-973/127/8)
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
Do you want to change the active partition? [n] y
Choosing 4 will make no partition active.
active partition: [0..4 default: 4] 0
Are you happy with this choice? [n] y
Then, create the NetBSD disklabel and add the partitions "a" and "d":
# disklabel -i -I sd0
partition> a
Filesystem type [?] [MSDOS]: 4.2BSD
Start offset ('x' to start after partition 'x') [0.0078125c, 8s, 0.00390625M]: 63
Partition size ('$' for all remaining) [973.991c, 997367s, 486.996M]: $
partition> d
Filesystem type [?] [unused]: (RETURN)
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: (RETURN)
Partition size ('$' for all remaining) [973.999c, 997375s, 487M]: (RETURN)
partition> W
Label disk [n]? y
Label written
We haven't written the MBR back to disk yet. This is your last chance.
Should we write new partition table? [n] y
Next, create a new NetBSD filesystem on partition sd0a:
# newfs /dev/rsd0a
Now, make the partition sd0a bootable:
# mkdir /stick
# mount /dev/sd0a /stick
# cp /usr/mdec/boot /stick
# umount /stick
# installboot -v -o timeout=1 /dev/rsd0a /usr/mdec/bootxx_ffsv1
Copy the installation sets to the memory stick
For the installation you need an installation kernel and the installation sets. To get them, fetch a NetBSD CD-image file, for example from a local FTP mirror:
$ cd /home/mark
$ ftp -a ftp://ftp.netbsd.org/pub/NetBSD/iso/4.0.1/i386cd-4.0.1.iso
Now mount the CD-image file:
$ su
# mkdir /image
# vnconfig -c vnd0 /home/mark/i386cd-4.0.1.iso
# mount_cd9660 /dev/vnd0d /image
And then, mount the memory stick and copy the install kernel and sets:
# mount /dev/sd0a /stick
# cp /image/i386/binary/kernel/netbsd-INSTALL.gz /stick/netbsd.gz
# cp -R /image/i386/binary/sets /stick/sets
# umount /stick
# rmdir /stick
Now you can unmount the CD-image:
# umount /image
# vnconfig -u vnd0
# rmdir /image
The memory stick is now ready to boot the NetBSD-Install system. Just reboot and change your BIOS to boot the USB memory stick.
The installation process
If the memory stick boots fine, proceed with the installation as usual, but the selection of the install sets is not quite intuitive:
"Your disk is now ready for installing the kernel and the distributions sets [...]"
[...]
Install from
f: Unmounted fs
Press RETURN and the following screen appears:
"Enter the unmounted local device and directory on that device where the distribution is located. [...]"
Choose the following options:
a: Device sd0a
b: File system ffs
c: Base directory
d: Set directory /sets
Yes, "c: Base directory" is left empty, because we copied the distribution .tgz files to the /sets directory on the memory stick earlier.
Now continue with the installation as usual. Good luck!
Alternative Method
An alternative setup method saves space on the stick at the expense of sysinst automation and is therefore more advanced. This method skips the sysinst tool by copying the sets and the normal GENERIC kernel instead of the install kernel.
Extract the sets from the hard disk directly on to the memory stick (/mnt):
# tar xvfzp sets.tgz -C /mnt
Extract the kernel to the target root:
# tar xvfzp GENERIC-kernel.tgz -C /mnt
All you now need to do is create a valid /etc/fstab on the target root (/mnt), set rc_configured=yes in its /etc/rc.conf, and reboot. All fine tuning can be done once you are logged in.
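Those two configuration files can also be written with a short script. This is only a sketch: it assumes the target root is mounted at /mnt and that the disk is wd0, and the helper name write_base_config is ours, not part of any NetBSD tool.

```shell
# write_base_config: create a minimal /etc/fstab and rc.conf on a mounted
# target root (device names wd0a/wd0b are an assumption; adjust to your disk)
write_base_config() {
    target=$1                     # e.g. /mnt, the mounted target root
    mkdir -p "$target/etc"
    cat > "$target/etc/fstab" <<'EOF'
/dev/wd0a / ffs rw 1 1
/dev/wd0b none swap sw 0 0
EOF
    # without this, rc(8) drops to single user on first boot
    echo 'rc_configured=yes' >> "$target/etc/rc.conf"
}

# usage on the mounted stick: write_base_config /mnt
```

After running it against the real target root, double-check the device names in fstab against your actual disklabel before rebooting.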
Using Samsung provided drivers
It almost works out of the box: all you have to do is download the driver from the Samsung web page and copy two files into the right directories of the CUPS system.
The steps are as follows:
- Download the following file: Linux Driver for CUPS.
- Extract the *tar.gz file, cd to the cdroot directory and copy the following files:
cp Linux/i386/at_root/usr/lib/cups/filter/rastertosamsungpcl /usr/pkg/libexec/cups/filter/
cp Linux/noarch/at_opt/share/ppd/ML-1640spl2.ppd /usr/pkg/share/cups/model/
- After that execute (or reboot the machine):
/etc/rc.d/cupsd restart
and your Samsung ML-1640 will work just out of the box.
Enjoy!
Using a native driver
Nowadays, the best way to get a wide range of Samsung and Xerox laser printers working with CUPS is using the Splix drivers. Since it's not in pkgsrc yet, you have to download and build it yourself.
$ wget http://ufpr.dl.sourceforge.net/sourceforge/splix/splix-2.0.0.tar.bz2 ;# or download from any sourceforge mirror
$ tar jxf splix-2.0.0.tar.bz2
$ cd splix-2.0.0
To build the driver we need CUPS (print/cups) and GNU Make (devel/gmake) installed from pkgsrc. Optionally, we can either disable JBIG support or install wip/jbigkit to fulfill its dependencies. In this example, JBIG support is disabled:
$ DISABLE_JBIG=1 gmake
$ su root -c 'gmake install'
Now the driver is installed, along with the PPD files. You can add the desired printer(s) the usual way.
In this article I will explain, step by step, how to install a NetBSD server with a root LFS partition.
Contents
Foreword
Since LFS is considered experimental, it is highly advised to test this setup on a testbed / Virtual Machine. Use at your own risk.
In this setup, the server will solely run under LFS without any FFS partitions.
There are a lot of ways to accomplish this task. This is how I do it.
What is LFS
LFS is an implementation of a log-structured file system: instead of overwriting data in place, it writes all modifications sequentially to a log. (Sun's ZFS is built on related copy-on-write ideas, although it is not strictly log-structured.)
What are the advantages
LFS can recover faster from a crash, because it does not need to fsck the whole disk. It is faster than FFS.
What are the disadvantages
It has never worked reliably.
It is limited to 2 Terabytes.
It does not perform very well at constant high disk activity like ftp uploads/downloads.
It can't handle situations where the disc is almost full, i.e. it usually crashes requiring a newfs, though most data can be recovered manually.
How do we approach it
We need to install a NetBSD system from scratch without sysinst, since sysinst lacks LFS support at the moment. This may change in the future.
Requirements
Physical access to the server.
We need a NetBSD liveCD to access the hard disks. Only liveCDs with LFS support compiled in will work. Therefore please download the Jibbed LiveCD from http://www.jibbed.org.
We will also need the NetBSD sets (base.tgz, comp.tgz, etc.tgz, man.tgz, misc.tgz, text.tgz ...). It is recommended to download all sets.
You can either download the latest sets from the NetBSD autobuild cluster (ftp://ftp.netbsd.org/pub/NetBSD-daily/HEAD/) or build your own release and use your own sets. I recommend using the latest NetBSD sources.
A tutorial on how to build current can be found here: ?How to build NetBSD-current.
The sets have to be accessible from the liveCD in some way. For example via http, ftp or scp.
Booting from the liveCD
Please boot into the liveCD on the server you want to install.
Gain root privileges (su -).
fdisk
Use fdisk to create an active NetBSD (ID 169) partition.
# fdisk -iu wd0
disklabel
Use disklabel to prepare your disk. This part of the tutorial is intentionally not very detailed; you should get comfortable with disklabel beforehand.
Enter
# disklabel -i -I wd0
on the command line to enter the interactive disklabel menu. I am assuming you are using wd0. Otherwise substitute with your drive (sd0, ld0...)
We will create one big "a" partition in this example. Feel free to try another setup in your second try.
In disklabel create one big partition "a" spanning the whole disk starting from sector 63 (63s) until the end minus the space you want to give to the swap partition.
Use 4.4LFS as your file system.
Partition b is used as swap. Start from the end of partition a until the end ($).
Partition c and d are the disks itself and should be of type unused starting from 0 to the end.
Remove all other partitions (e-p).
When you are finished your label should look like this:
# size offset fstype [fsize bsize cpg/sgs]
a: 73400320 63 4.4LFS 0 0 0 # (Cyl. 0*- 72817*)
b: 2097152 73400383 swap # (Cyl. 72817*- 77504*)
c: 78124937 63 unused 0 0 # (Cyl. 0*- 77504*)
d: 78125000 0 unused 0 0 # (Cyl. 0 - 77504*)
Label the disk (N), Write changes to disk (W), and quit (Q).
newfs_lfs
You can now create the LFS filesystem on the disk you just labeled.
# newfs_lfs wd0a
There are more options, like -A and different segment and fragment sizes, but we will stick to the default 1M segment size, since other values may make LFS unstable.
mounting
The rest is trivial. We mount the filesystem and extract our sets.
# mkdir /tmp/targetroot
# mount /dev/wd0a /tmp/targetroot
Create another directory to store the sets in.
# mkdir /tmp/sets
Change into that directory:
# cd /tmp/sets
And download your sets, for example via ftp. These are the sets you prepared upfront by either compiling a release or downloading them from the autobuild cluster.
# ftp 192.168.0.200
...
extracting the sets
Extract your sets using option -p (important):
# cd /tmp/sets
# tar xvzpf base.tgz -C /tmp/targetroot
Repeat with all your sets, but extract only one kernel: the GENERIC kernel from kern-GENERIC.tgz.
configure the new system
Change into /tmp/targetroot and do a base configuration. Edit etc/fstab:
/dev/wd0a / lfs rw 1 1
/dev/wd0b none swap sw 0 0
ptyfs /dev/pts ptyfs rw 0 0
tmpfs /tmp tmpfs rw
ptyfs and tmpfs are optional, but recommended.
Edit etc/rc.conf
rc_configured=yes
bootstrap
Copy boot to the targetroot.
# cp /tmp/targetroot/usr/mdec/boot /tmp/targetroot
And bootstrap
# /usr/sbin/installboot -v -m i386 -o timeout=5,console=pc /dev/rwd0a /tmp/targetroot/usr/mdec/bootxx_lfsv2
creating devices
Don't forget to create all devices.
# cd /tmp/targetroot/dev
# ./MAKEDEV all
this may take a while.
reboot
That's it. Sync and reboot.
# sync
# sync
# sync
# reboot
If everything went well, your system should boot. Once you have logged in, you can configure your system and do all the fine tuning.
Disk capacity
You should not fill up your LFS partition over 75%. This could damage the file system (at the moment).
Remote installation
If you want to install an LFS root file system remotely on a server in a data center, console access is beneficial but not strictly necessary; the minimum requirement is a rescue console, which is usually a Linux ramdisk. One approach is to build a custom boot floppy that includes LFS support and newfs_lfs. Because newfs_lfs does not otherwise fit on the disk, you have to exclude unnecessary tools. Then write a small shell script, executed when the floppy boots, that performs all the steps in this tutorial, including adding a user account and setting up ifconfig, resolv.conf and the default gateway, so you can log in afterwards. Make a backup with dd of the first 5 MB you are going to overwrite. Then dd the floppy image to the server's hard disk and reboot. Good luck.
Contents
Preface
The goal of this article is to help deal with networks under NetBSD. NetBSD provides a number of network tools for examining both specific and general network issues; their use is briefly explained here.
Before we start, remember that hosts (and other network devices) must be connected by cable, which provides the physical connection to the network. This article covers network tools in general and does not discuss wireless networking in particular.
Even once the physical connection is in place (all cables connected), hosts and network devices also need a working logical connection. Only when both the physical and the logical connection exist can devices talk to each other, i.e. exchange information.
First, the physical part of the network. Very often a network is a number of hosts connected to one or more hubs or switches. Check these devices carefully: a misconfiguration can hide in any one of them. What looks like a simple hub may be a switch, and switches may have VLANs or similar options enabled that deliberately block particular hosts or groups of hosts. All hubs and switches should be checked (usually through telnet or a web interface) to verify the options in use; sometimes a factory reset is needed to be absolutely sure no blocking options are active.
Second, the logical part of the network. Hosts, even when physically connected, may be configured for different logical networks. For example, 192.168.1.X and 192.168.2.X are two different networks, and hosts on different logical networks cannot exchange information directly; special mechanisms such as address translation and routing enable information interchange between different networks. When hosts are configured within one network, say Host-A with 192.168.1.10 and Host-B with 192.168.1.20 on 192.168.1.X, they can exchange information.
Third, one of NetBSD's many advantages is that a single computer can use several different network cards and thus serve a large number of different networks. The same computer can provide routing and network address translation as desired, can refer to a number of DNS servers, and can provide services such as an independent ?DNS-server or ?Apache web-server. NetBSD also provides a number of tools and features for testing the network.
dmesg
Use dmesg(8) to obtain information about network adapters on the NetBSD machine:
$ dmesg | more
$ less /var/run/dmesg.boot
dmesg provides basic information about the installed network adapters. For example:
rtk0 at pci3 dev 0 function 0: Realtek 8139 10/100BaseTX
rtk0: interrupting at irq 5
rtk0: Ethernet address 01:00:25:28:fa:c0
This means that the network card has a Realtek chip and that the system calls it rtk0. A computer can have one, two, or even five network cards installed; the number of available mainboard slots limits the number of network cards.
In BSD the network interface name contains the name of the network driver. rtk is the driver for Realtek 8129/8139 based network cards, so rtk0 is the first network card using the rtk driver; a second identical card would be named rtk1.
ifconfig
Use ifconfig to see which network cards are in use:
$ ifconfig -a
rtk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
address: 00:00:21:20:fa:c0
media: Ethernet autoselect (none)
status: active
inet 192.168.17.1 netmask 0xffffff00 broadcast 192.168.17.255
inet alias 192.168.18.1 netmask 0xffffff00 broadcast 192.168.18.255
inet6 fe80::200:21ff:fe20:fac0%rtk0 prefixlen 64 scopeid 0x1
lo0: flags=8009<UP,LOOPBACK,MULTICAST> mtu 33192
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
You can see the card rtk0 is UP and running with status: active. It has the IP address 192.168.17.1; this card also has a second IP address, 192.168.18.1, which is an alias. The network card configuration is stored in:
/etc/ifconfig.rtk0
Usually ifconfig.rtk0 has the following content:
inet 192.168.17.1 netmask 255.255.255.0
Where 192.168.17.X is the network, 192.168.17.1 is the address of this host, and 255.255.255.0 is the subnet mask, which allows access to 254 hosts on the network.
In the case above, ifconfig.rtk0 has the following:
inet 192.168.17.1 netmask 255.255.255.0
inet 192.168.18.1 netmask 255.255.255.0 alias
This demonstrates the ability to access not only one but two or more networks, with a different address for this host on each; the additional addresses are referred to as aliases.
Note, ifconfig.rtk0 may also have content like this:
inet 192.168.17.1 netmask 255.255.255.254
Here the netmask also defines the number of hosts within the network: a netmask of 255.255.255.254 limits the network to 2 addresses, counting from the address of the machine upwards.
ping
Use the ping(8) command to check the plain network connectivity of other devices (e.g. computers, printers, VoIP phones) connected to your network.
For example, start from IP address of single computer to see if it is alive:
# ping -n 192.168.17.1
It returns:
64 bytes from 192.168.17.1: icmp_seq=0 ttl=255 time=0.049 ms
64 bytes from 192.168.17.1: icmp_seq=1 ttl=255 time=0.051 ms
64 bytes from 192.168.17.1: icmp_seq=2 ttl=255 time=0.050 ms
Stop it by pressing CTRL+C.
Replies to our ping request mean that the host is up and running.
Also check the response: it may come from a different IP address, as a router or intermediary switch may carry a special route or a translated address and answer in the middle of the route.
nmap
Install and use net/nmap to see a list of all hosts on your network. After installing nmap, just run:
# nmap -sP 192.168.17.1-254
or
# nmap -sP 192.168.17.*
The asterisk and the range 1-254 both mean that nmap checks the whole 192.168.17/24 network.
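The same sweep can be sketched with a plain shell loop. This is only an illustration (the sweep function name is ours); the actual ping call is left commented out so the loop is harmless to run as-is:

```shell
# sweep: print the 254 host addresses nmap would probe on a /24 network;
# uncomment the ping line to turn it into a real (slow) ping sweep
sweep() {
    net=$1                         # network prefix, e.g. 192.168.17
    i=1
    while [ "$i" -le 254 ]; do
        echo "$net.$i"
        # ping -c 1 "$net.$i" >/dev/null 2>&1 && echo "$net.$i is up"
        i=$((i + 1))
    done
}

sweep 192.168.17 | sed -n '1p;254p'   # prints 192.168.17.1 and 192.168.17.254
```

A shell loop like this pings hosts one at a time, so nmap remains much faster; the sketch is mainly useful on machines where nmap is not installed.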
nmap is a very powerful network tool. Please use with care.
mygate
/etc/mygate contains the IP address of your default gateway. The address of that router may be written in the ?/etc/mygate file, like this:
192.168.170.201
or you can use:
defaultroute="192.168.170.201"
in your /etc/rc.conf
Some systems add the route in the /etc/rc.conf file or have routing software installed, such as Zebra or Quagga, to handle RIP, BGP, OSPF and other routing protocols. In that case the /etc/mygate file may be empty.
netstat
The netstat(1) command shows the network status; it symbolically displays the contents of various network-related data structures.
To see the network routing tables, do:
# netstat -rn
It returns something like this:
Internet:
Destination Gateway Flags Refs Use Mtu Interface
default 192.168.170.201 UGS 1 34064 - rtk0
loopback/8 localhost UGRS 0 0 33192 lo0
localhost localhost UH 1 6 33192 lo0
192.168.170/24 link#1 UC 6 0 - rtk0
192.168.170.201 00:60:97:51:d1:d0 UHLc 2 7121 - rtk0
192.168.170.216 00:00:21:2b:d5:9b UHLc 0 71 - lo0
192.168.170.255 link#1 UHLc 3 787 - rtk0
This output means the following.
Your Network Interface Card (NIC) is here:
192.168.170.216 00:00:21:2b:d5:9b UHLc 0 71 - lo0
You have link#1 into 192.168.170.X network:
192.168.170/24 link#1 UC 6 0 - rtk0
Your default Gateway (e.g. IP address of router connected to your network) is here:
default 192.168.170.201 UGS 1 34064 - rtk0
Note the two lines that are important for routing:
Destination Gateway Flags Refs Use Mtu Interface
default 192.168.170.201 UGS 1 34064 - rtk0
192.168.170.201 00:60:97:51:d1:d0 UHLc 2 7121 - rtk0
This means NIC 00:60:97:51:d1:d0 with IP 192.168.170.201 is heard by your card, and this computer uses that particular IP address as the gateway to the other part of the network.
sockstat
sockstat(1) is a handy tool to list open sockets. It is commonly used to list all listening sockets:
# sockstat -l
USER COMMAND PID FD PROTO LOCAL ADDRESS FOREIGN ADDRESS
root dhcpcd 103 6 stream /var/run/dhcpcd.sock -
root syslogd 157 3 dgram /var/run/log -
root sshd 254 4 tcp *.ssh *.*
root lpd 283 5 stream /var/run/printer -
root lpd 283 6 tcp6 *.printer *.*
root lpd 283 7 tcp *.printer *.*
root Xorg 319 1 tcp6 *.x11 *.*
root Xorg 319 3 stream /tmp/.X11-unix/X0 -
root master 491 12 tcp *.smtp *.*
root master 491 13 tcp6 *.smtp *.*
ipnat
Check the contents of the /etc/rc.local file. It may contain the following lines:
sysctl -w net.inet.ip.forwarding=1
ipnat -f /etc/ipnat.conf
This enables address translation. Check contents of /etc/ipnat.conf file:
map rtk0 192.168.1.0/24 -> 91.193.165.158/32 proxy port ftp ftp/tcp
map rtk0 192.168.1.0/24 -> 91.193.165.158/32 portmap tcp/udp 10000:20000
map rtk0 192.168.1.0/24 -> 91.193.165.158/32
This tells network card rtk0 to translate all addresses coming from the 192.168.1.X network into the single address 91.193.165.158.
The reasons to translate are simple: ISPs usually provide customers with a single IP address or a small set of them, and do not deal with the customers' internal networks at all. Translating a large internal network to a small set of public IP addresses, or even a single address, allows the whole internal network to reach the outside, for example to provide Internet access. Here, a single IP address services a network with internal addresses from 192.168.1.0 to 192.168.1.255 (i.e. 254 hosts). Such mapping can be very flexible. For example, for a single address:
map rtk0 192.168.2.2/32 -> 91.193.165.158/32 proxy port ftp ftp/tcp
map rtk0 192.168.2.2/32 -> 91.193.165.158/32 portmap tcp/udp 10000:20000
map rtk0 192.168.2.2/32 -> 91.193.165.158/32
For network with 254 hosts:
map rtk0 192.168.2.0/24 -> 91.193.165.158/32 proxy port ftp ftp/tcp
map rtk0 192.168.2.0/24 -> 91.193.165.158/32 portmap tcp/udp 10000:20000
map rtk0 192.168.2.0/24 -> 91.193.165.158/32
Or much more globally:
map rtk0 0.0.0.0/0 -> 91.193.165.158/32 proxy port ftp ftp/tcp
map rtk0 0.0.0.0/0 -> 91.193.165.158/32 portmap tcp/udp 10000:20000
map rtk0 0.0.0.0/0 -> 91.193.165.158/32
Note that ipnat translation is netmask sensitive, so make sure you use only correct subnets.
subnets
Very often an IP address bears an additional slash and a number, for example 192.168.2.2/24 or /32. This is called CIDR notation.
To deal with subnets, see these tables: Image:Tablitsa1.jpg Image:Tablitsa2.jpg
For additional calculations, you can use net/sipcalc.
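For a quick calculation without extra tools, the address count for a given prefix length can also be derived with shell arithmetic. A small sketch (the helper name cidr_addresses is ours):

```shell
# cidr_addresses: number of addresses in a subnet, computed from the /prefix;
# a /N network contains 2^(32-N) addresses
cidr_addresses() {
    prefix=$1                      # e.g. 24 for a /24
    echo $(( 1 << (32 - prefix) ))
}

cidr_addresses 24   # prints 256 (254 usable hosts plus network and broadcast)
cidr_addresses 32   # prints 1 (a single address)
```

Subtract 2 from the result for the number of usable host addresses on ordinary networks (the network and broadcast addresses are reserved).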
More information can be found at Subnetwork
ipfilter
Because ipfilter and ipnat work together, also check the contents of the ipf.conf file. A simple example is:
pass in from any to any
pass out from any to any
However, some network cards, addresses and networks can be blocked or open.
For advanced configuration see IPFilter resources and Packet Filter.
resolv.conf(5)
The /etc/resolv.conf file contains the addresses of the nameservers that resolve alphabetical hostnames into numeric IP addresses.
Note. The addresses of the DNS nameservers are usually obtained (when connecting to the internet) from your Internet Service Provider. You can also use any public DNS nameservers. You can set up your own ?DNS Server with NetBSD.
To see which nameserver(s) are in use, check the contents of the /etc/resolv.conf file:
nameserver 145.253.2.75
nameserver 194.146.112.194
If the computer is itself the DNS server, the IP address is usually set like this:
nameserver 127.0.0.1
If you experience 5-30 second delays while surfing, check with dig(1) whether all listed nameservers are really accessible, e.g.:
$ dig @145.253.2.75 A www.netbsd.org
See also
DTrace is a Dynamic Tracing framework developed by Sun and ported to NetBSD. It enables extensive instrumentation of the kernel and user space. See the DTrace Community Page for more information. Also see DTrace Introduction, Brendan Gregg's DTrace one liners and his notes for DTrace on FreeBSD.
Current status
Supported platforms
DTrace is a work-in-progress effort; it is available for x86 systems and some ARM boards.
- i386 and amd64
- earm* (evbarm and armv4 based ports)
Supported providers
- DTrace: What to do when a script BEGINs, ENDs, ERRORs
- FBT: Function Boundary Tracing
- IO: Disk I/O
- Lockstat: Kernel Lock Statistics
- Proc: Process and thread related events
- Profile: Time based interrupt event source for Profiling
- SDT: Statically Defined Tracing
- Syscall: System Calls
- Syscall Linux (32bit & 64 bit): System calls via the Linux binary emulation layer
- VFS: Filesystem operations (confined to namecache events at time of writing - 8.99.22)
TODO for netbsd-7
- Measure the effect of options KDTRACE_HOOKS on system performance.
- Determine whether the profile module works and list it here.
- Integrate riz's syscall provider patch.
How to use
Building DTrace
You need the following options in your kernel:
options KDTRACE_HOOKS # kernel DTrace hooks
options MODULAR
Optionally:
options INSECURE # permit modules to be loaded from user space once the system has gone multiuser and securelevel has been raised
A distribution needs to be built with the options MKDTRACE=yes and MKCTF=yes; this is taken care of automatically and doesn't need to be specified manually. The list of platforms it is applied to automatically is set in src/share/mk/bsd.own.mk.
Set the system to load the solaris and dtrace related modules in /etc/modules.conf; for a list of available modules, see /stand/$MACHINE/$VERSION/modules/.
For example, add the following to /etc/modules.conf (the file may not exist already on a system):
solaris
dtrace
dtrace_fbt
dtrace_lockstat
dtrace_profile
dtrace_sdt
dtrace_syscall
dtrace_syscall_linux
A dtrace device node is created automatically in /dev/dtrace when the modules are loaded into place.
List the dtrace probes
dtrace -l
ID PROVIDER MODULE FUNCTION NAME
1 dtrace BEGIN
2 dtrace END
3 dtrace ERROR
4 fbt netbsd AcpiAcquireGlobalLock entry
5 fbt netbsd AcpiAcquireGlobalLock return
6 fbt netbsd AcpiAllocateRootTable entry
7 fbt netbsd AcpiAttachData entry
.
.
29129 fbt solaris zfs_vop_getattr entry
29130 fbt solaris zfs_vop_getattr return
29131 proc create
29132 proc exec
.
.
29140 proc lwp_start
29141 proc lwp_exit
Running hello world
Put the following into the file hello.d:
BEGIN
{
trace("Hello world");
exit(0);
}
Run the hello world script:
dtrace -s hello.d
dtrace: script './hello.d' matched 1 probe
CPU ID FUNCTION:NAME
0 1 :BEGIN Hello world
The same script could be executed as a one liner on the shell, using
dtrace -n 'BEGIN { trace("Hello world"); exit(0); }'
A more complex example
The following script traces the execution of a sleep operation in the kernel. Put it in sleep.d:
#pragma D option flowindent
syscall::nanosleep:entry
/execname == "sleep" && guard++ == 0/
{
self->traceme = 1;
}
fbt:::
/self->traceme/
{}
syscall::nanosleep:return
/self->traceme/
{
self->traceme = 0;
exit(0);
}
Start the script running:
dtrace -s sleep.d
This will take a while as the script instruments every function in the kernel. When it's ready, it will print a message like "dtrace: script 'sleep.d' matched 59268 probes". Then execute a "sleep 2" in another shell.
Tools included in base
Starting with NetBSD-8, on builds where MKDTRACE=yes is set, scripts from Brendan Gregg's DTrace toolkit are installed in base as standard.
At present, the following scripts are installed in /usr/sbin:
dtruss - an implementation of the truss utility in DTrace, which traces the system calls made by a process
execsnoop - snoop on execution of processes as they occur
opensnoop - snoop on opening of files as they occur
procsystime - print process system call time details
Troubleshooting
The Compact C Type Format (CTF) has a 2^15 (32768) limit on the number of types, which can overflow; an overflow prevents DTrace from working correctly.
Check the number of types using ctfdump, e.g.:
ctfdump -S /netbsd
Note the line which states the total number of types; the value should be less than 32768.
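The check can be scripted. The sketch below parses the ctfdump output on stdin; the helper name is ours, and the exact wording of the ctfdump summary line is an assumption:

```shell
# ctf_type_count: extract the type count from "ctfdump -S" output on stdin
# (assumes the summary contains a line mentioning "total number of types")
ctf_type_count() {
    awk '/total number of types/ { print $NF }'
}

# typical use, against a kernel built with CTF data:
# ctfdump -S /netbsd | ctf_type_count
```

Comparing the printed number against 32768 in a script then tells you whether the kernel is at risk of overflowing the CTF type limit.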
If overflow is not an issue, libdtrace(3) can provide some insight into what is going on via an environment variable. Define DTRACE_DEBUG before tracing:
env DTRACE_DEBUG= execsnoop
Contents
Introduction
The purpose of this document is to guide you in creating a RAM disk image and a custom kernel, in order to boot your mini NetBSD off a Compact Flash card or to debug it in an emulated environment such as qemu.
The ramdisk image has to be inserted into your kernel and is then extracted to the memory of your embedded device (or your emulator) and used as the root file system, just as if it resided on a "normal" storage medium like a hard drive.
The steps below were tested in a NetBSD 4.99.20 i386 box.
Create the ramdisk
First we need to create the ramdisk that will get embedded into the kernel. The ramdisk contains a filesystem with whatever tools are needed, usually ?basics/init and some tools like sysinst, ls(1), etc.
Standard ramdisk
To create the standard ramdisk (assuming your source tree lies at /usr/src and you have /usr/obj and /usr/tools):
# cd /usr/src
# ./build.sh -O ../obj -T ../tools -u tools
# cd /usr/src/etc/
# make MAKEDEV
# cd /usr/src/distrib/i386/ramdisks/ramdisk-big
# make TOOLDIR=/usr/tools
Custom ramdisk
If you want to customize the contents of the filesystem, customize the list
file. Let's say for example that we need the ?basics/uname utility to be included in the ramdisk, which is not by default.
# cd /usr/src
# ./build.sh -O ../obj -T ../tools -u tools
# cd /usr/src/etc/
# make MAKEDEV
# cd /usr/src/distrib/i386/ramdisks/ramdisk-big
# cp list list.old
Then we edit the list file, adding the following line:
PROG bin/uname
And after having done it:
# make TOOLDIR=/usr/tools
Either way, you will get something like this:
# create ramdisk-big/ramdisk-big.fs
Calculated size of `ramdisk-big.fs.tmp': 5120000 bytes, 65 inodes
Extent size set to 4096
ramdisk-big.fs.tmp: 4.9MB (10000 sectors) block size 4096, fragment size 512
using 1 cylinder groups of 4.88MB, 1250 blks, 96 inodes.
super-block backups (for fsck -b #) at:
32,
Populating `ramdisk-big.fs.tmp'
Image `ramdisk-big.fs.tmp' complete
And verify with:
$ ls -lh ramdisk-big.fs
-rwxr-xr-x 1 root wheel 4.9M Jun 19 08:33 ramdisk-big.fs
Build the kernel
Next we shall build our custom kernel with ramdisk support. We may choose any INSTALL* kernel configuration. Here, we will use INSTALL_TINY:
# cd /usr/src/sys/arch/i386/conf
# cp INSTALL_TINY MY_INSTALL_TINY
Then we edit the MY_INSTALL_TINY file and go to the section:
# Enable the hooks used for initializing the root memory-disk.
options MEMORY_DISK_HOOKS
options MEMORY_DISK_IS_ROOT # force root on memory disk
options MEMORY_DISK_SERVER=0 # no userspace memory disk support
options MEMORY_DISK_ROOT_SIZE=3100 # size of memory disk, in blocks
The value of MEMORY_DISK_ROOT_SIZE must be equal to or bigger than the size of your image, counted in 512-byte blocks. To convert the kernel value to kilobytes you can use the following rule:
MEMORY_DISK_ROOT_SIZE=10000 would give 10000*512/1024 = 5000 kb
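The same arithmetic in the other direction can be sketched as a small shell helper (the name blocks_for_image is ours): given an image size in bytes, it yields the minimum MEMORY_DISK_ROOT_SIZE in 512-byte blocks.

```shell
# blocks_for_image: minimum MEMORY_DISK_ROOT_SIZE (512-byte blocks) needed
# for a ramdisk image of the given size in bytes, rounded up
blocks_for_image() {
    bytes=$1
    echo $(( (bytes + 511) / 512 ))
}

blocks_for_image 5120000   # the 5120000-byte ramdisk built earlier needs 10000 blocks
```

Feeding in the size reported for ramdisk-big.fs tells you directly what to set in the kernel configuration file.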
We check that the following lines are un-commented:
pseudo-device md 1 # memory disk device (ramdisk)
file-system MFS # memory file system
Once we are done with the configuration file, we proceed with building our kernel:
# cd /usr/src
# ./build.sh -O ../obj -T ../tools -u kernel=MY_INSTALL_TINY
Insert ramdisk to kernel
Having built our kernel, we may now insert the ramdisk to the kernel itself:
# cd /usr/src/distrib/i386/instkernel
# cp Makefile Makefile.old
Then we edit the Makefile to make sure that the RAMDISKS and MDSETTARGETS variables are set properly. After the modifications, mine looks like this:
# $NetBSD$
.include <bsd.own.mk>
.include "${NETBSDSRCDIR}/distrib/common/Makefile.distrib"
# create ${RAMDISK_*} variables
#
RAMDISKS= RAMDISK_B ramdisk-big
.for V F in ${RAMDISKS}
${V}DIR!= cd ${.CURDIR}/../ramdisks/${F} && ${PRINTOBJDIR}
${V}= ${${V}DIR}/${F}.fs
.endfor
MDSETTARGETS= MY_INSTALL_TINY ${RAMDISK_B} -
MDSET_RELEASEDIR= binary/kernel
.include "${DISTRIBDIR}/common/Makefile.mdset"
.include <bsd.prog.mk>
Next, run:
# make KERNOBJDIR=/usr/obj/sys/arch/i386/compile TOOLDIR=/usr/tools
Should you encounter errors of the following form, try increasing MEMORY_DISK_ROOT_SIZE in your kernel configuration file.
i386--netbsdelf-mdsetimage: fs image (5120000 bytes) too big for buffer (1587200 bytes)
*** Error code 1
Provided that everything went ok, you will have the following files in your current directory:
# ls -lh netbsd*
-rwxr-xr-x 1 root wheel 21M Jun 20 11:07 netbsd-MY_INSTALL_TINY
-rw-r--r-- 1 root wheel 1.5M Jun 20 11:07 netbsd-MY_INSTALL_TINY.gz
-rw-r--r-- 1 root wheel 58K Jun 20 11:07 netbsd-MY_INSTALL_TINY.symbols.gz
Make image bootable
Finally:
# cd /usr/src/distrib/i386/floppies/bootfloppy-big
# cp Makefile Makefile.old
Edit the Makefile if your kernel config has a different name than INSTALL*: replace FLOPPYKERNEL= netbsd-INSTALL.gz with FLOPPYKERNEL= netbsd-MY_INSTALL_TINY.gz (where MY_INSTALL_TINY is of course the name of the custom kernel).
# make TOOLDIR=/usr/tools
[...]
Final result:
-rw-r--r-- 1 root wheel 2949120 Jun 20 11:15 boot-big1.fs
Now you are ready to test your image, with qemu for example:
$ cd /usr/src/distrib/i386/floppies/bootfloppy-big
$ qemu boot-big1.fs
Credits
This article is based on ?Yazzy's work, who was kind enough to permit us to include his article in this wiki. The original document may be found here.
Additional related links
What is Mercurial (HG)
We will not discuss this here; the project's homepage is http://www.selenic.com/mercurial/wiki/ . This DVCS has a few advantages over others like CVS, SVN and many more. I will mention just a few of them here:
- It is easy to learn and use.
- It is lightweight.
- It scales excellently.
- It is easy to customise.
Beyond the nature of HG itself, in this howto we will set up a central server for committing work, i.e. a central repository server which you may need for your own reasons. The requirements in this howto are:
- nginx 0.6.x or better
- repository
- mercurial 0.9.5 or better
- spawn-fcgi from lighttpd
- zip 2.32 or better
- python 2.5 or better
- htpasswd from Apache
First, let's start with the nginx configuration over HTTPS.
Nginx Configuration
The nginx configuration is not that hard, but somewhat tricky. To make it easy, I will give examples here for better understanding. The section we need is actually only the one for SSL on port 443:
server {
listen 443;
keepalive_timeout 70;
server_name <IP_ADDRESS> your.domain.org;
ssl on;
ssl_certificate /usr/pkg/etc/nginx/cert.pem;
ssl_certificate_key /usr/pkg/etc/nginx/cert.key;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
access_log /var/log/nginx-https-access.log;
location / {
auth_basic "closed repository";
auth_basic_user_file access/htfile;
fastcgi_pass 127.0.0.1:10000;
fastcgi_param SCRIPT_FILENAME /path/to/repo$fastcgi_script_name;
fastcgi_param PATH_INFO $uri;
include fastcgi_params;
}
location /project_a/ {
auth_basic "closed project";
auth_basic_user_file access/htfile;
fastcgi_pass 127.0.0.1:10000;
fastcgi_param SCRIPT_FILENAME /path/to/repo$fastcgi_script_name;
fastcgi_param PATH_INFO $uri;
include fastcgi_params;
}
}
In this example make sure you change IP_ADDRESS, your.domain.org, project_a and /path/to/repo. The access/htfile file, located in the nginx configuration folder, is your password file. In this example I use the same file for both locations, but if you need different read-only access for the base / and for project_a, you may use different password files in order to separate the users. This is how we define read access to the repository.
To create the password file named by auth_basic_user_file we need to use htpasswd or another such tool. We can do that with the following command:
# htpasswd -c <new_pass_file> <user_to_add>
With this we create the new password file and add a username. Adding a user to an already existing file can be done with:
# htpasswd <your_pass_file> <user_to_add>
HG configs
I will start with a simple config file of a project in your repository, which shows how we handle read/write access.
Our hgrc file looks like this:
[web]
style = gitweb
name = project_a
description = Description of Project_A
contact = foo@domain.org
allow_archive = bz2 gz zip
allow_push = user1,user2
push_ssl = false
As you can see from the hgrc config file in your project's .hg/ folder, we use standard options for the description, contact and name of the project. The important option here is allow_push, which describes which users have the right to write to this project; in our case user1 and user2 can write. The option push_ssl is set to false because we do not need to encrypt the connection again, as it already passes through HTTPS. The next step is to create our main configuration files.
Copy your hgwebdir.fcgi to the repository folder and change the following line :
return hgwebdir("/path/to/repo/hgweb.config")
After that, create the file hgweb.config in /path/to/repo and include the following options, or add more if you feel the need to:
[paths]
# projects
project_a = /path/to/repo/project_a
[web]
style = gitweb
[trusted]
user = *
group = *
Make sure you have every project listed in here. After all this is set up, fix the permissions of your repository to match those of your web server; in our case this is user nobody, group nogroup. The next step is to start the nginx web server. Make sure you have set the appropriate number of worker_processes in the configuration file, then start the spawn-fcgi daemon with the following command:
/root/spawn-fcgi -f /home/repo/hgwebdir.fcgi -a 127.0.0.1 -p 10000 -u nobody -g nogroup 2>&1
NOTE: if you see any typos or incorrect information, please send me email at nkalev at bsdtrap dot org.
atactl
S.M.A.R.T. is a monitoring facility for hard disks. On NetBSD you can show the SMART values using the following command:
# atactl wd0 smart status
SMART supported, SMART enabled
id value thresh crit collect reliability description raw
1 100 0 no online positive Raw read error rate 2446
4 98 0 no online positive Start/stop count 2492
5 253 9 yes online positive Reallocated sector count 0
7 253 51 yes online positive Seek error rate 0
8 253 0 no offline positive Seek time performance 0
9 99 0 no online positive Power-on hours count 1112403
12 99 0 no online positive Device power cycle count 1418
194 161 0 no online positive Temperature 53 Lifetime max/min 0/0
197 253 9 yes online positive Current pending sector 0
198 253 9 yes offline positive Offline uncorrectable 0
199 100 0 no online positive Ultra DMA CRC error count 0
200 100 51 yes online positive Write error rate 0
201 100 51 yes online positive Soft read error rate 0
smartctl
Another utility to monitor hard disks is smartctl. This utility is part of the sysutils/smartmontools package.
# cd /usr/pkgsrc/sysutils/smartmontools/
# make package
With smartctl you can enable SMART and show the values using the following commands:
# smartctl -s on /dev/sd0d
smartctl version 5.36 [i386--netbsdelf] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Informational Exceptions (SMART) enabled
Temperature warning enabled
# smartctl -a /dev/sd0d
smartctl version 5.36 [i386--netbsdelf] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: MAXTOR ATLAS10K5_300WLS Version: HPM7
Serial number: J8033RSK
Device type: disk
Transport protocol: Parallel SCSI (SPI-4)
Local Time is: Sun Nov 18 19:39:10 2007 CET
Device supports SMART and is Enabled
Temperature Warning Enabled
SMART Health Status: OK
Current Drive Temperature: 47 C
Manufactured in week 05 of year
Current start stop count: 1074003968 times
Recommended maximum start stop count: 1124401151 times
Elements in grown defect list: 0
Error counter log:
Errors Corrected by Total Correction Gigabytes Total
ECC rereads/ errors algorithm processed uncorrected
fast | delayed rewrites corrected invocations [10^9 bytes] errors
read: 0 0 0 0 0 0.000 0
write: 0 0 0 0 0 0.000 0
Non-medium error count: 1115
Last n error events log page
No self-tests have been logged
Long (extended) Self Test duration: 5760 seconds [96.0 minutes]
smartd
Smartd is a SMART Disk Monitoring Daemon (part of the smartmontools).
The data may be useful to detect defective or old hard drives.
This might be useful for people that use custom build flags when building NetBSD. It is loosely based on ftp://ftp.pulsar-zone.net/building-netbsd.txt
Building sets
Sometimes builds fail because things don't fit in 2.88M when built with certain optimizations. If that's your case, then do:
# ./build.sh build distribution sets
instead of
# ./build.sh release
You'll then have your sets in your RELEASEDIR (/usr/obj/releasedir).
Build kernels
Do this for each kernel you want to build an install set for:
# ./build.sh kernel=GENERIC releasekernel=GENERIC
Finishing up
Then you may install over the network, or build install floppies/CD-ROMs without optimization.
On a laptop you typically want to conserve battery power. On NetBSD you can set up sysutils/estd to control the Enhanced SpeedStep technology. Estd monitors the ratio between load and idle states the OS records. By setting high- and low-water marks you can control the load above or below which to switch to the next higher or lower CPU frequency, respectively.
Example:
estd -l 40 -h 70 -b
If the load drops below 40%, estd lowers the CPU frequency to conserve battery. If it rises above 70%, estd increases the frequency.
If you are using a multi-core CPU, estd will use the ratio between the sums over all CPUs of load versus idle states.
So, estd takes care of balancing battery power against performance.
To control temperature you can use envsys(4).
Put into /etc/envsys.conf:
coretemp0 {
sensor0 {
critical-max = 65C;
}
}
coretemp1 {
sensor0 {
critical-max = 65C;
}
}
The configuration is activated by issuing
envstat -c /etc/envsys.conf
Now the envsys system will trigger critical events whenever the temperature of one of these sensors goes above 65C. Such an event results in executing the script /etc/powerd/scripts/sensor_temperature.
In this script I put:
.
.
.
case "${2}" in
normal)
case "${1}" in
coretemp*)
echo "estd_flags=\"-l 40 -h 70 -b\"" >/etc/rc.conf.d/estd
/etc/rc.d/estd restart
;;
esac
exit 0
;;
.
.
.
critical-over)
case "${1}" in
coretemp*)
/etc/rc.d/estd stop
sysctl -w machdep.est.frequency.target=600
;;
esac
exit 0
;;
.
.
.
It works nicely: the temperature sensor goes above 65C under heavy load, envsys triggers a critical-over event, and the script stops estd and sets the frequency to the lowest possible value on my CPU. Under heavy load it stays there long enough for the sensors to go down to about 52C-55C, before the normal event is triggered, estd is restarted and the heat rises again...
WebDAV is an abbreviation for Web-based Distributed Authoring and Versioning; it's commonly just called DAV. WebDAV is an open standard that extends the HTTP/1.1 protocol with mechanisms such as write support.
There are many ways to use WebDAV on NetBSD.
Using Gnome
Gnome supports WebDAV mounts by default.
Using KDE
You can use Konqueror to mount a remote DAV Server.
Using www/cadaver
Another possibility is to install the package www/cadaver. This is a command line tool to access DAV servers.
Using filesystems/fuse-wdfs
With fuse-wdfs you can mount webdav directories and access subversion repositories like a usual mountpoint. fuse(8) requires puffs to be built in your kernel.
See also
An open-source Windows driver is available here: FFS File System Driver for Windows. This driver allows you to mount a NetBSD partition using a Mount Manager (included with the driver). Currently, writing is not supported. The partition content is viewable in Windows Explorer.
Note: Partitions are mounted the Windows way, so you have to check the right disk and partition numbers before mounting.
To find out what userland version you're running, you can run cat /etc/release.
On a daily snapshot of NetBSD 4.0 BETA2, that results in the following, for example:
$ cat /etc/release
NetBSD 4.0_BETA2/amd64
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
The NetBSD Foundation, Inc. All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Build settings:
Build date Sun Aug 26 03:02:56 UTC 2007
Built by builds@wb28
BSDOBJDIR = '/usr/obj'
BSDSRCDIR = '/usr/src'
BUILDID = '200708250002Z'
DESTDIR = '/home/builds/ab/netbsd-4/amd64/200708250002Z-dest'
EXTERNAL_TOOLCHAIN = (undefined)
HAVE_GCC = '4'
HAVE_GDB = '5'
INSTALLWORLDDIR = (undefined)
KERNARCHDIR = 'arch/amd64'
KERNCONFDIR = '/home/builds/ab/netbsd-4/src/sys/arch/amd64/conf'
KERNOBJDIR = '/home/builds/ab/netbsd-4/amd64/200708250002Z-obj/home/builds/ab/netbsd-4/src/sys/arch/amd64/compile'
KERNSRCDIR = '/home/builds/ab/netbsd-4/src/sys'
MACHINE = 'amd64'
MACHINE_ARCH = 'x86_64'
MAKE = '/home/builds/ab/netbsd-4/amd64/200708250002Z-tools/bin/nbmake'
MAKECONF = '/home/builds/etc/make.conf'
MAKEFLAGS = ' -d e -m /home/builds/ab/netbsd-4/src/share/mk
-d e -m /home/builds/ab/netbsd-4/src/share/mk -j 1
-J 15,16 HOST_OSTYPE=NetBSD-4.0_BETA2-i386 MKOBJDIRS=yes
NOPOSTINSTALL=1 USETOOLS=yes _SRC_TOP_=/home/builds/ab/netbsd-4/src
_SRC_TOP_OBJ_=/home/builds/ab/netbsd-4/amd64/200708250002Z-obj/home/builds/ab/netbsd-4/src _THISDIR_=etc/'
MAKEOBJDIR = (undefined)
MAKEOBJDIRPREFIX = '/home/builds/ab/netbsd-4/amd64/200708250002Z-obj'
MAKEVERBOSE = '0'
MKBFD = 'yes'
MKCATPAGES = 'yes'
MKCRYPTO = 'yes'
MKCRYPTO_IDEA = 'no'
MKCRYPTO_MDC2 = 'no'
MKCRYPTO_RC5 = 'no'
MKCVS = 'yes'
MKDEBUG = 'no'
MKDEBUGLIB = 'no'
MKDOC = 'yes'
MKDYNAMICROOT = 'yes'
MKGCC = 'yes'
MKGCCCMDS = 'yes'
MKGDB = 'yes'
MKHESIOD = 'yes'
MKHOSTOBJ = (undefined)
MKHTML = 'yes'
MKIEEEFP = 'yes'
MKINET6 = 'yes'
MKINFO = 'yes'
MKIPFILTER = 'yes'
MKKERBEROS = 'yes'
MKLINKLIB = 'yes'
MKLINT = 'yes'
MKMAN = 'yes'
MKMANZ = 'no'
MKNLS = 'yes'
MKOBJ = 'yes'
MKOBJDIRS = 'yes'
MKPAM = 'yes'
MKPF = 'yes'
MKPIC = 'yes'
MKPICINSTALL = 'yes'
MKPICLIB = 'yes'
MKPOSTFIX = 'yes'
MKPROFILE = 'yes'
MKSHARE = 'yes'
MKSKEY = 'yes'
MKSOFTFLOAT = 'no'
MKSTATICLIB = 'yes'
MKUNPRIVED = 'yes'
MKUPDATE = 'yes'
MKUUCP = (undefined)
MKX11 = 'yes'
MKYP = 'yes'
NBUILDJOBS = (undefined)
NETBSDSRCDIR = '/home/builds/ab/netbsd-4/src'
NOCLEANDIR = (undefined)
NODISTRIBDIRS = (undefined)
NOINCLUDES = (undefined)
OBJMACHINE = (undefined)
RELEASEDIR = '/home/builds/ab/netbsd-4/amd64/200708250002Z-rlse'
TOOLCHAIN_MISSING = 'no'
TOOLDIR = '/home/builds/ab/netbsd-4/amd64/200708250002Z-tools'
USETOOLS = 'yes'
USR_OBJMACHINE = (undefined)
X11SRCDIR = '/home/builds/ab/netbsd-4/xsrc'
If you're running a mix and match of userland, kernel, and individual userland apps with cherry-picked CVS revisions, the ident program may be used to find the revisions of the individual source files a binary was composed of:
$ ident `which which`
/usr/bin/which:
$ NetBSD: crt0.c,v 1.4 2004/08/26 21:23:06 thorpej Exp $
$ NetBSD: whereis.c,v 1.18 2006/07/30 11:50:29 martin Exp $
Author: Svetoslav P. Chukov hydra@nhydra.org
Introduction
Used parts of BSD Sockets API
Functions list
int accept (int socket, struct sockaddr *addr, socklen_t *length_ptr)
This function is used to accept a connection request on the server socket socket. The accept function waits if there are no connections pending, unless the socket socket has nonblocking mode set. (You can use select to wait for a pending connection, with a nonblocking socket.) See File Status Flags, for information about nonblocking mode.
The addr and length_ptr arguments are used to return information about the name of the client socket that initiated the connection. See Socket Addresses, for information about the format of the information.
Accepting a connection does not make socket part of the connection. Instead, it creates a new socket which becomes connected. The normal return value of accept is the file descriptor for the new socket.
After accept, the original socket socket remains open and unconnected, and continues listening until you close it. You can accept further connections with socket by calling accept again.
If an error occurs, accept returns -1.
int bind (int socket, struct sockaddr *addr, socklen_t length)
The bind function assigns an address to the socket socket. The addr and length arguments specify the address; the detailed format of the address depends on the namespace. The first part of the address is always the format designator, which specifies a namespace, and says that the address is in the format of that namespace.
The return value is 0 on success and -1 on failure.
int connect (int socket, struct sockaddr *addr, socklen_t length)
The connect function initiates a connection from the socket with file descriptor socket to the socket whose address is specified by the addr and length arguments. (This socket is typically on another machine, and it must be already set up as a server.) See Socket Addresses, for information about how these arguments are interpreted.
Normally, connect waits until the server responds to the request before it returns. You can set nonblocking mode on the socket socket to make connect return immediately without waiting for the response. See File Status Flags, for information about nonblocking mode.
The normal return value from connect is 0. If an error occurs, connect returns -1.
uint16_t htons (uint16_t hostshort)
This function converts the uint16_t integer hostshort from host byte order to network byte order.
uint32_t htonl (uint32_t hostlong)
This function converts the uint32_t integer hostlong from host byte order to network byte order.
This is used for IPv4 Internet addresses.
int listen (int socket, unsigned int n)
The listen function enables the socket socket to accept connections, thus making it a server socket.
The argument n specifies the length of the queue for pending connections. When the queue fills, new clients attempting to connect fail with ECONNREFUSED until the server calls accept to accept a connection from the queue.
The listen function returns 0 on success and -1 on failure.
ssize_t read (int socket, void *buffer, size_t size)
If nonblocking mode is set for socket, and no data are available to be read, read fails immediately rather than waiting. See File Status Flags, for information about nonblocking mode.
This function returns the number of bytes received, or -1 on failure.
ssize_t send (int socket, const void *buffer, size_t size, int flags)
The send function is like write, but with the additional flags flags. The possible values of flags are described in Socket Data Options.
This function returns the number of bytes transmitted, or -1 on failure. If the socket is nonblocking, then send (like write) can return after sending just part of the data. See File Status Flags, for information about nonblocking mode.
Note, however, that a successful return value merely indicates that the message has been sent without error, not necessarily that it has been received without error.
int shutdown (int socket, int how)
The shutdown function shuts down the connection of socket socket. The argument how specifies what action to perform:
- 0 - Stop receiving data for this socket. If further data arrives, reject it.
- 1 - Stop trying to transmit data from this socket. Discard any data waiting to be sent. Stop looking for acknowledgement of data already sent; don't retransmit it if it is lost.
- 2 - Stop both reception and transmission.
The return value is 0 on success and -1 on failure.
int socket (int namespace, int style, int protocol)
This function creates a socket and specifies communication style style, which should be one of the socket styles listed in Communication Styles. The namespace argument specifies the namespace; it must be PF_LOCAL (see Local Namespace) or PF_INET (see Internet Namespace). protocol designates the specific protocol (see Socket Concepts); zero is usually right for protocol.
The return value from socket is the file descriptor for the new socket, or -1 in case of error.
The file descriptor returned by the socket function supports both read and write operations. However, like pipes, sockets do not support file positioning operations.
Structures and data types
struct sockaddr
The struct sockaddr type itself has the following members:
short int sa_family - This is the code for the address format of this address. It identifies the format of the data which follows.
char sa_data[14] - This is the actual socket address data, which is format-dependent. Its length also depends on the format, and may well be more than 14. The length 14 of sa_data is essentially arbitrary.
AF_LOCAL - This designates the address format that goes with the local namespace. (PF_LOCAL is the name of that namespace.) See Local Namespace Details, for information about this address format.
AF_UNIX - This is a synonym for AF_LOCAL. Although AF_LOCAL is mandated by POSIX.1g, AF_UNIX is portable to more systems. AF_UNIX was the traditional name stemming from BSD, so even most POSIX systems support it. It is also the name of choice in the Unix98 specification. (The same is true for PF_UNIX vs. PF_LOCAL).
AF_FILE - This is another synonym for AF_LOCAL, for compatibility. (PF_FILE is likewise a synonym for PF_LOCAL.)
AF_INET - This designates the address format that goes with the Internet namespace. (PF_INET is the name of that namespace.) See Internet Address Formats.
AF_INET6 - This is similar to AF_INET, but refers to the IPv6 protocol. (PF_INET6 is the name of the corresponding namespace.)
AF_UNSPEC This designates no particular address format. It is used only in rare cases, such as to clear out the default destination address of a "connected" datagram socket.
The corresponding namespace designator symbol PF_UNSPEC exists for completeness, but there is no reason to use it in a program.
struct sockaddr_in
This is the data type used to represent socket addresses in the Internet namespace. It has the following members:
sa_family_t sin_family - This identifies the address family or format of the socket address. You should store the value AF_INET in this member. See Socket Addresses.
struct in_addr sin_addr - This is the Internet address of the host machine. See Host Addresses, and Host Names, for how to get a value to store here.
unsigned short int sin_port - This is the port number.
When you call bind or getsockname, you should specify sizeof (struct sockaddr_in) as the length parameter if you are using an IPv4 Internet namespace socket address.
Programming steps in a simple example
Create the socket
Before using any socket you have to create it. This can be done via the socket() function. There are two general types of sockets: network sockets and local sockets. The following lines show how to create each of them.
Network socket
A network socket is used for communicating over a network. Here is an example of how to create one:
int sock;
sock = socket ( PF_INET, SOCK_STREAM, IPPROTO_TCP );
The "PF_INET" argument specifies that the socket will be an internet socket. Let's take a look at each of the arguments.
PF_INET - specifies the socket namespace (in our case, an internet socket). SOCK_STREAM - specifies that the connection will be via a stream. IPPROTO_TCP - the protocol used will be TCP.
In the above example we make an internet socket using TCP, simple and easy ...
Local socket
A local socket is used for local connections; it is used in interprocess communication. Here is an example of how to create one:
int sock;
sock = socket ( PF_LOCAL, SOCK_DGRAM, 0 );
The "PF_LOCAL" argument specifies that the socket will be a local socket. Let's take a look at each of the arguments.
PF_LOCAL - specifies the socket namespace (in our case, a local socket). SOCK_DGRAM - specifies that the connection will be via datagrams. 0 - no particular protocol.
Initialize the socket structure and make a socket address
After creating the socket we have to initialize the socket structure to make it usable. Here is how to do that:
struct sockaddr_in ServAddr;
const char * servIP;
int ServPort;
memset(&ServAddr, 0, sizeof(ServAddr));
ServAddr.sin_family = AF_INET;
ServAddr.sin_addr.s_addr = htonl(INADDR_ANY);
ServAddr.sin_port = htons ( port );
This example sets up an internet socket address that accepts connections from ANY address. Let's have a closer look at the lines above.
This line zeroes the ServAddr structure with memset. This structure holds the server address and all the information needed for the socket to work.
memset(&ServAddr, 0, sizeof(ServAddr));
This line sets the socket family. As I said above, a socket can be internet or local. In this example it is internet, so we put AF_INET.
ServAddr.sin_family = AF_INET;
The following line specifies that we accept connections from ANY address.
ServAddr.sin_addr.s_addr = htonl(INADDR_ANY);
s_addr is the variable that holds the address we agree to accept connections from. In this case I put INADDR_ANY because I would like to accept connections from any internet address. This is used in the server example; in a client example I would not accept connections from ANY address.
After all the above, the next important thing is the PORT. All internet sockets need an input/output port to make a connection. You can use any port you want; the required condition is that the port is free, in other words available for us.
NOTE: Some ports need root rights to be opened. If such a port is compromised, it puts the entire operating system at risk.
So I suggest using non-root ports, of which there are plenty: ports from 1024 to 65535 are available to ordinary non-root users.
Here is the example about the port initializing.
ServAddr.sin_port = htons ( port );
Let me describe the line above. Here "port" is an integer variable. You can also do it this way:
ServAddr.sin_port = htons ( 10203 );
The line above will open port 10203 for connections.
Client Side specific options
Connect to the server
The client connects to the server via the connect() function. That function takes our socket (the client socket in this case) and the server's address, and connects the two. Here is the example code:
connect(sock, (struct sockaddr *) &ServAddr, sizeof(ServAddr))
To make sure everything is good and everybody is happy, do the following:
if (connect(sock, (struct sockaddr *) &ServAddr, sizeof(ServAddr)) < 0) {
printf("connect() failed\n");
}
This code makes sure that we have a working connection. If the connection failed, connect returns -1 and the logic prints the error message "connect() failed". If the function succeeded, there is a connection to the server available and ready for use.
Server Side specific options
Binding the socket
After all the preparations, the next important step is binding the socket. This connects the socket address to the created socket. If you skip this step you cannot use your socket, because it will have no address to access it by. This is like a building and its street number: if you don't know the street number, you cannot find the building you want.
Well, here is the example how to do that:
bind ( sock, ( struct sockaddr * ) &ServAddr, sizeof ( ServAddr ) );
The bind function is very important because it makes your socket available for use. So a better way to do the above is to add some error handling to make sure the socket is really available.
This will bind the socket, but also check for errors; if the binding failed, the logic exits.
if ( bind ( sock, ( struct sockaddr * ) &ServAddr, sizeof ( ServAddr ) ) < 0 ) {
perror ( "bind" );
exit ( EXIT_FAILURE );
}
Listening for incoming connections
listen(sock, MAXPENDING);
The second argument specifies the length of the queue for pending connections. So if you want to allow 5 pending connections, you can do it this way:
listen (sock, 5);
This marks the socket as listening and ready to accept incoming connections.
Accepting connections
Accepting a connection goes through a few steps. First we need a structure of type sockaddr_in to hold the client address. Then we need a variable to hold the length of that structure, and we store the client address length in it, so there is enough room for the data.
Here is the example code:
struct sockaddr_in ClntAddr;
unsigned int clntLen;
clntLen = sizeof(ClntAddr);
clntSock = accept(servSock, (struct sockaddr *) &ClntAddr, &clntLen);
Transferring data
Sending
Once the sockets are connected, the next step is simply using the connection. Sending data can be done via the send() or write() functions.
Here is the example:
send(sock, "\n", StringLen, 0);
This is a simple example of how to send a "\n" character to the server. We can send any information: characters, symbols, any data that has to be sent.
Let me describe the example above. The first argument takes the socket variable. The second argument takes the data to be sent, and the 3rd argument is an integer specifying how long the data is. The last argument is for additional options; if you don't need them, just put 0, like me.
NOTE: The variable "sock" is the socket that will be used. But this is your socket, not the socket of the server. I think this can confuse someone, so: I assume you are writing a network client, and for that reason you create a client socket. Through this client socket you do all the communication. Have a closer look and see the difference between these two sockets: you use the client socket, not the server one; the server socket is for the server.
I just wanted to be clear, because some people make this mistake when they write the server and client sides.
Receiving
When some data is sent from the other side, someone has to wait to receive it. This is not hard; here is a simple example:
recv(sock, recvBuffer, 256, 0)
Receiving is like sending: simple and easy. The first argument takes the socket variable, the second takes the BUFFER for storing incoming data, the third is an integer specifying the maximum number of bytes to read, and the last takes flags (0 here).
So when you put 256, recv() will read up to 256 bytes from the incoming data and return when data arrives or the connection is closed.
IMPORTANT: Reserve a BUFFER the same size or larger than the length you specify to read. DO NOT specify a buffer that is smaller than the read length. If you do, you will get a "SEGMENTATION FAULT" error and YOUR PROGRAM WILL TERMINATE.
Advanced tricks
There is a huge number of advanced tricks with BSD sockets. Here is one of the main ones:
- Receiving data: Receiving data is an important part of network socket communication. There is an issue with the buffer when you receive data via the network: if you receive data shorter than the length of the buffer, the rest of the buffer contains garbage. Because of that, you should make the receive buffer and the sent data exactly the same length, BUT the receive buffer must be one character larger than the sent data, to leave room for the terminating \0 character. So if you send this: "some data to be send", that is a 20-character message, and the buffer for receiving MUST BE 20 + 1. You send 20 characters, but you store 20+1 characters; the last one is the terminating character.
Full example source code
network.h
#include <netinet/in.h>
#include <netdb.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#define ADRESS_PORT 10203
#define ADRESS_IP "127.0.0.1"
#define MAXPENDING 5
#define BUFFSIZE 21
#define SERVER_SOCKET 1
#define CLIENT_SOCKET 0
#define TRUE 1
#define FALSE 0
#define START 11
#define DIVIDER ":"
network.c
#include "network.h"
int make_socket ( uint16_t port, int type, const char * server_IP )
{
int sock;
struct hostent * hostinfo = NULL;
struct sockaddr_in server_address;
/* Create the socket. */
sock = socket ( PF_INET, SOCK_STREAM, IPPROTO_TCP );
if (sock < 0) {
perror ( "socket" );
exit ( 1 );
}
/* Give the socket a name. */
memset(&server_address, 0, sizeof(server_address));
server_address.sin_family = AF_INET;
server_address.sin_port = htons ( port );
if ( type == SERVER_SOCKET ) {
server_address.sin_addr.s_addr = htonl(INADDR_ANY);
if ( bind ( sock, ( struct sockaddr * ) &server_address, sizeof ( server_address ) ) < 0 ) {
perror ( "bind" );
exit ( 1 );
}
if ( listen(sock, MAXPENDING) < 0 ) {
printf("listen() failed");
}
} else if ( type == CLIENT_SOCKET ) {
server_address.sin_addr.s_addr = inet_addr(server_IP);
/* Establish the connection to the server */
if (connect(sock, (struct sockaddr *) &server_address, sizeof(server_address)) < 0) {
printf("connect() failed\n");
}
}
return sock;
}
void close_socket (int socket)
{
close (socket);
}
char * clean_data( const char * data )
{
char * ptr_data = NULL;
char * result_data = NULL;
char * temp_ptr_data = NULL;
/* Skip everything up to and including the divider. */
ptr_data = strstr (data, DIVIDER);
if ( ptr_data == NULL )
return NULL;
ptr_data = &ptr_data[strlen(DIVIDER)];
/* Copy the rest; +1 leaves room for the terminating '\0'. */
temp_ptr_data = malloc ( strlen (ptr_data) + 1 );
strcpy (temp_ptr_data, ptr_data);
/* Cut the copy off at the next divider, if any. */
result_data = strsep (&temp_ptr_data, DIVIDER);
printf ("%zu, %zu, %s", strlen (data), strlen (ptr_data), result_data);
return result_data;
}
void send_data ( int socket, const char * data )
{
int sent_bytes;
int sendstrlen;
sendstrlen = strlen ( data );
sent_bytes = send ( socket, data, sendstrlen, 0 );
if ( sent_bytes < 0 )
perror ( "send" );
printf ("\t !!! Sent data: %s --- \n", data);
}
server.c
#include "network.h"
int accept_connection(int server_socket)
{
    int client_socket; /* Socket descriptor for client */
    struct sockaddr_in client_address; /* Client address */
    socklen_t client_length; /* Length of client address data structure */

    /* Set the size of the in-out parameter */
    client_length = sizeof(client_address);

    /* Wait for a client to connect */
    if ((client_socket = accept(server_socket, (struct sockaddr *) &client_address, &client_length)) < 0) {
        printf("accept() failed\n");
    }

    /* client_socket is connected to a client! */
    printf("Handling client %s\n", inet_ntoa(client_address.sin_addr));
    return client_socket;
}
void handle_client (int client_socket)
{
    char buffer [BUFFSIZE]; /* Buffer for incoming data */
    int msg_size; /* Size of received message */

    do {
        alarm (60); /* give up if the client stalls for 60 seconds */
        msg_size = read (client_socket, buffer, BUFFSIZE - 1);
        alarm (0);
        if ( msg_size <= 0 ) {
            printf ( " %i ", msg_size );
            printf ( "End of data\n" );
        } else {
            buffer[msg_size] = '\0'; /* NUL-terminate before printing */
            printf ("Data received: %s", buffer);
        }
    } while ( msg_size > 0 );
}
int main()
{
    int clnt_sock;
    int sock = make_socket(ADRESS_PORT, SERVER_SOCKET, "none");

    clnt_sock = accept_connection (sock);
    handle_client(clnt_sock);
    close_socket(clnt_sock); /* close the client connection as well */
    close_socket(sock);
    return 0;
}
client.c
#include "network.h"
int main()
{
    int sock = make_socket(ADRESS_PORT, CLIENT_SOCKET, "10.35.43.41");

    send_data (sock, "Some data to be sent");
    close_socket(sock);
    return 0;
}
How to compile it?
Compile via cc:
cc network.c server.c -o server_example (server)
cc network.c client.c -o client_example (client)
To compile and use the sockets you just have to include the main header files. If you don't know which they are, here you are:
Just put these lines at the beginning of your program's .h files.
#include <netinet/in.h>
#include <netdb.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <stdio.h>
Include these and you should have no problems...
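All of the examples above include network.h, which this HOW-TO never shows in full (only START and DIVIDER appear at the top). Here is a minimal sketch of what the rest of the header might contain; only the names (SERVER_SOCKET, CLIENT_SOCKET, ADRESS_PORT, BUFFSIZE, MAXPENDING) come from the code, and the values below are illustrative assumptions:

```c
/* network.h -- hypothetical reconstruction; the constant values are guesses */
#ifndef NETWORK_H
#define NETWORK_H

#include <netinet/in.h>
#include <netdb.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdint.h>

#define SERVER_SOCKET 0      /* assumed value */
#define CLIENT_SOCKET 1      /* assumed value */
#define ADRESS_PORT   5000   /* assumed port (spelling as used in the examples) */
#define BUFFSIZE      1024   /* assumed receive buffer size */
#define MAXPENDING    5      /* assumed listen(2) backlog */
#define START         11
#define DIVIDER       ":"

int    make_socket(uint16_t port, int type, const char *server_IP);
void   close_socket(int socket);
char * clean_data(const char *data);
void   send_data(int socket, const char *data);

#endif /* NETWORK_H */
```

With a header along these lines the cc commands above should compile cleanly.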
Description
BSD sockets are a basic building block of networking and the Internet. This HOW-TO focuses on BSD socket programming, but it should be useful to other programmers too; sockets work much the same way in all operating systems.
In general, this HOW-TO describes socket programming on all *NIX-like operating systems, including GNU/Linux, BSD, OpenSolaris and others.
Contents
Author's note
This document really describes (what I remember of installing) my system, with tidbits I've forgotten from various sources on the net. I can't guarantee that following this document you'll get a working system, but I hope it will provide some insights into how the thing is supposed to work.
Staffan Thomén duck@shangtai.net
Server setup
First things first: you'll need to set up an OpenLDAP server somewhere. This is fairly straightforward, as it's available in pkgsrc. The tricky bit is really configuring the ACLs, since the openldap logs are incredibly hard to read. Generally it's probably a good idea to firewall it from the outside and worry about the ACL setup later, if you want to do things like let other departments see your users or let the public see contact information.
An example config file is included in the package (${LOCALBASE}/etc/openldap/slapd.conf), and the only thing that really needs to be added is the inclusion of some schemas for user authentication:
cosine.schema
inetorgperson.schema
nis.schema
These are (in pkgsrc-2014Q1) installed in ${LOCALBASE}/share/examples/openldap/schema and can just be included from there; they tell the server which record keys (as in key-value pairs) it shall accept.
And that really is it for the server bit. Next comes testing it out with a few ldap commands.
The basic commands of talking directly with the ldap database are ldapadd, ldapmodify and ldapsearch. These are in the openldap-client package, so you won't have to install the entire server on a client machine.
Options you'll be using a lot, like -b (base) and -H (host URI), can conveniently be put in a client configuration file, ${PKG_SYSCONFBASE}/openldap/ldap.conf, which will save you the time and aggravation of having to type them all the time.
To talk to your ldap server, try running ldapsearch;
% ldapsearch -H ldap://my.server/
This really means dump everything, but since we've nothing in the database it will respond with an error.
To set this database up for user authentication, we'll need to lay down some structure. LDAP is generally a hierarchical database of records with key-value pairs. We'll first need to tell it about our organisation and then add a user.
Here we'll be using ldapadd, which reads a format called ldif. It is a flat text format that looks something like this:
dn: cn=example,dc=org
objectClass: dcObject
objectClass: organization
objectClass: top
o: Example Organisation
dc: example
dn: ou=groups,dc=example,dc=org
objectClass: top
objectClass: organizationalUnit
ou: groups
dn: ou=people,dc=example,dc=org
objectClass: top
objectClass: organizationalUnit
ou: people
The text above defines three records. Each starts with the distinguished name of the record (dn:), which is a unique identifier for the record.
"cn=example,dc=org" is the root of this organisation, with a common name (cn) of example and a domain component (dc) of org. Next come the objectClass lines, which tell us that this is a domain component object, an organisation object and a top-level object. We then have an organisation (o:) line, which is a descriptive text, and finally a domain component line (dc:), which is the stored value for the dc (the same as in the distinguished name).
Following this are two records which define something called in ldap terms organisational units, and as you see from the dn:, essentially two branches of the main tree. They are here to be used for the user groups (yes, like /etc/groups) and the actual users.
If you want, you can just stick all of this in one file (even the user below) and add it with ldapadd -f file.ldif; this will create the initial structure of your database.
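For instance, assuming the records above are saved in file.ldif, the add could be done with a simple bind as the database's root user (cn=root,dc=example,dc=org is the rootdn used later in this document; adjust it to match your slapd.conf):

```shell
# ldapadd -x -D "cn=root,dc=example,dc=org" -W -f file.ldif
```

Here -x selects a simple bind and -W prompts for the bind password.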
Adding a group and then a user is no more difficult; you just have to fill out the right fields.
dn: cn=ldapusers,ou=groups,dc=example,dc=org
objectClass: top
objectClass: posixGroup
cn: ldapusers
gidNumber: 101
memberUid: bill
memberUid: george
A group named ldapusers with the number 101, and the secondary users bill and george (these are of course not required).
dn: uid=test,ou=people,dc=example,dc=org
objectClass: top
objectClass: posixAccount
objectClass: inetOrgPerson
uid: test
uidNumber: 2000
gidNumber: 101
o: Example Organisation
cn: Test User
givenName: Test
sn: User
gecos: Test User,3b,+358800128128,+35801234567
loginShell: /bin/ksh
homeDirectory: /home/test
mail: test@example.org
displayName: El Magnifico Test User
A user with the uid test, belonging to group ldapusers (101); o: is the same as in the root record above, and the others, apart from sn (surname), are fairly obvious. The GECOS field contains comma-separated values; apparently it's pulled straight into the client system.
The fields actually required by the schemas are:
uid
uidNumber
gidNumber
cn
sn
homeDirectory
LDAP can store multiple roots, and each entry can hold more than just login information: as above, the entry also mentions email, phone numbers and so on for our test user, and it could also include binary data like a mugshot or a recording of them playing the corporate theme on banjo. As far as authentication is concerned, though, we've got what we want.
So far so good; this should not cause much trouble to set up, and I believe I've covered everything required. The thing I had the most problems with in relation to the database itself was that it is so unstructured: you have to provide all the structure yourself.
Client Setup
In order to log in on a NetBSD system we need to provide two things, a way for the system to authenticate you and a way for it to find out what your group, user id, etc. is.
The first part, authentication, is taken care of by PAM (or possibly by some BSD auth scheme, but that is not yet implemented as far as I know).
The second part is done via libc and the NSS subsystem.
In order to do this, we need to provide plugins for either system that enable LDAP support in them. The plugins are in pkgsrc and are called
security/pam-ldap
and
databases/nss_ldap
The latest versions of these packages (pkgsrc-2014Q2 and newer) will automatically create the necessary symbolic links in /usr/lib and /usr/lib/security so that these modules can be used. For older versions you will have to create a symbolic link from /usr/lib/nss_ldap.so.0 to ${LOCALBASE}/lib/nss_ldap.so.1 and from /usr/lib/security/pam_ldap.so to ${LOCALBASE}/lib/security/pam_ldap.so
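For those older versions, the links can be created by hand, using the paths given above (${LOCALBASE} is typically /usr/pkg):

```shell
# ln -s ${LOCALBASE}/lib/nss_ldap.so.1 /usr/lib/nss_ldap.so.0
# ln -s ${LOCALBASE}/lib/security/pam_ldap.so /usr/lib/security/pam_ldap.so
```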
Before we go any further, I'd like to introduce some security into the mix; up until now we've talked to the ldap server without any limitations, using what are called anonymous binds, i.e. not logged in.
XXX can anonymous binds actually write to a db without ACLs?
This is an ldap user, just like the test user outlined above, since the ldap database can authenticate against itself. (You don't have to do it this way, but I haven't explored the other possibilities, such as SASL.)
So we'll create a user called nss
dn: cn=nss,dc=example,dc=org
objectClass: top
objectClass: inetOrgPerson
o: Example Organisation
cn: nss
sn: manager
We'll attach a password so that not just anyone can connect, and also change our LDAP configuration slightly so that we use encrypted passwords.
userPassword: {SSHA}w5aocfmGgZqq3h8AjvaZiw8WKdrRTjTi
To generate this password I use slappasswd (bundled with openldap-server):
% slappasswd -h "{SSHA}"
And in slapd.conf add
password-hash {SSHA}
And of course you'll need to change the secret for the rootpw into something encrypted.
Note that the traffic between the ldap client and the server is still not encrypted (that is, if you've been following this document), so this might be best performed locally.
This user will be used for ACL filtering later.
Next we'll need to configure the LDAP part of the plugins. A convenience here is that, since both plugins are made by the same people, they can share a configuration file. They will look for ${PKG_SYSCONFBASE}/nss_ldap.conf and ${PKG_SYSCONFBASE}/pam_ldap.conf, but linking them to the same file will let you have just one place to configure (and to protect, for your ldap user password).
The important bits in this file are the base setting and the URI for your ldap server:
base dc=example,dc=org
uri ldap://my.server/
Next we need to tell it who it should contact the ldap database as:
binddn cn=nss,dc=example,dc=org
If you want to be able to change passwords as root without knowing the user's password in advance (with passwd; using ldapmodify you can still just set it, if you bind with the credentials to do so (see ACLs)), you also need to bind as the database's root user. I haven't mentioned this user before; it's allowed to do anything:
rootbinddn cn=root,dc=example,dc=org
The password for this will not be in this file, but in a separate file called ${PKG_SYSCONFBASE}/nss_ldap.secret or, for pam, ${PKG_SYSCONFBASE}/pam_ldap.secret. (I'm not sure about this, but my system has both, linked together.)
Finally we will set the password exchange method to exop;
pam_password exop
This is the OpenLDAP extended method, and while the passwords will still be sent in the clear, they are stored in the database encrypted with the database's scheme.
So while you can use ldapsearch to get the data (though ACLs can prevent this if properly set up) it will still only be a hash.
That's it for configuring the plugins so far.
NSS
The next change we will need to do is to enable the ldap module in nsswitch.conf:
Change
group: files
...
passwd: files
To
group: files ldap
...
passwd: files ldap
...
netgroup: files ldap
This will enable you to have local accounts as well as LDAP users. You can test this out now by running the getent program:
% getent group
Will present you with a list of all the groups in the system, with the ldap group 'ldapusers' we created earlier tacked on to the end of the list.
% getent passwd
And this will show you the user list, with the ldap user 'test' at the end.
PAM
PAM keeps its configuration files in /etc/pam.d/; these are divided into one file per PAM service in the system. Most just include system, but some need special attention.
On my system I have the following changes from the stock netbsd setup:
/etc/pam.d/sshd
#
# PAM configuration for the "sshd" service
#
# auth
auth required pam_nologin.so no_warn
auth sufficient pam_krb5.so no_warn try_first_pass
auth sufficient pam_ldap.so no_warn try_first_pass
# pam_ssh has potential security risks. See pam_ssh(8).
#auth sufficient pam_ssh.so no_warn try_first_pass
auth required pam_unix.so no_warn try_first_pass
# account
account required pam_krb5.so
account required pam_login_access.so
account required pam_unix.so
# session
# pam_ssh has potential security risks. See pam_ssh(8).
#session optional pam_ssh.so
session required pam_permit.so
# password
password sufficient pam_krb5.so no_warn try_first_pass
password sufficient pam_ldap.so no_warn try_first_pass
password required pam_unix.so no_warn try_first_pass
/etc/pam.d/su
#
# PAM configuration for the "su" service
#
# auth
auth sufficient pam_rootok.so no_warn
auth sufficient pam_self.so no_warn
auth sufficient pam_ksu.so no_warn try_first_pass
#auth sufficient pam_group.so no_warn group=rootauth root_only fail_safe authenticate
auth requisite pam_group.so no_warn group=wheel root_only fail_safe
auth sufficient pam_ldap.so no_warn try_first_pass
auth required pam_unix.so no_warn try_first_pass nullok
# account
account required pam_login_access.so
account include system
# session
session required pam_permit.so
/etc/pam.d/system
# $NetBSD: openldap_authentication_on_netbsd.mdwn,v 1.5 2014/06/04 20:57:12 tron Exp $
#
# System-wide defaults
#
# auth
auth sufficient pam_krb5.so no_warn try_first_pass
auth sufficient pam_ldap.so no_warn try_first_pass
auth required pam_unix.so no_warn try_first_pass nullok
# account
account required pam_krb5.so
account required pam_unix.so
# session
session required pam_lastlog.so no_fail no_nested
# password
password sufficient pam_krb5.so no_warn try_first_pass
password sufficient pam_ldap.so no_warn try_first_pass
password sufficient pam_unix.so no_warn try_first_pass
password required pam_deny.so prelim_ignore
The last bit here, with pam_deny, is a bit special: it is what enables you to change passwords for both local users and those in the ldap database with the passwd command. pam_deny with the prelim_ignore flag is needed, else pam will fail in the preliminary phase (it is always run through twice) and you will not be able to change passwords.
In order to use this you need to patch your pam_deny (/usr/src/lib/libpam/modules/pam_deny.c) with the patch by Edgar Fuß ef@math.uni-bonn.de:
http://mail-index.netbsd.org/tech-userlevel/2007/08/29/0001.html
The original message describing the problem is here:
http://mail-index.netbsd.org/tech-userlevel/2007/08/25/0006.html
/etc/pam.d/sudo
#
# PAM configuration for the "sudo" service
#
# auth
auth sufficient pam_ldap.so no_warn try_first_pass
auth required pam_unix.so no_warn try_first_pass nullok use_uid
# account
account required pam_login_access.so
account include system
# session
session required pam_permit.so
This file is only required if you want to use the "sudo" package from "pkgsrc". You will have to compile this package manually with "PKG_OPTIONS.sudo" set to "pam" because it doesn't support PAM by default.
Securing your system
As the document stands now, this setup is unprotected, in that anyone listening in on the packets travelling through your network would be able to read the unencrypted messages of your ldap users. Not a happy thought.
So we'll want to enable SSL encryption of the traffic between your clients and the server.
In order to do this you will need to create an SSL certificate for your server and also distribute it to the client machines, so that they will be able to certify the authenticity of the server.
We'll also need to configure slapd to use it. I put my keys in the /etc/openssl hierarchy, since it seemed made for it.
TLSCipherSuite HIGH
TLSCertificateFile /etc/openssl/certs/openldap.pem
TLSCertificateKeyFile /etc/openssl/private/openldap.pem
TLSCACertificateFile /etc/openssl/certs/openldap.pem
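If you don't already have a certificate, a self-signed one is enough for testing. A sketch using openssl(1) follows; the subject name and lifetime are illustrative assumptions, and a real deployment may want a certificate from a proper CA:

```shell
# openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
      -subj "/CN=my.server" \
      -keyout /etc/openssl/private/openldap.pem \
      -out /etc/openssl/certs/openldap.pem
# chmod 600 /etc/openssl/private/openldap.pem
```

The CN should match the host name the clients use to reach the server, or they will refuse to verify it.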
Next we'll need to change the client setup so that the clients will use encryption. Enable ssl in ${PKG_SYSCONFBASE}/{nss,pam}_ldap.conf:
ssl start_tls
Next, if you're like me and using the ${PKG_SYSCONFBASE}/openldap/ldap.conf file, telling the client libs there where to find the cert file is enough; we don't have to put it in the nss/pam config:
TLS_CACERT /etc/openssl/certs/openldap.pem
If getent still works, encryption is happening. You can of course also tcpdump your network traffic to see what's going on.
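Another way to watch the negotiation is openssl's s_client, which in newer OpenSSL versions can speak the LDAP STARTTLS extension (this assumes the server listens on the standard port 389):

```shell
% openssl s_client -connect my.server:389 -starttls ldap
```

If the server's certificate chain is printed, it is offering TLS.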
ACL
I left the access control lists of the server for last, because they are the easiest to get wrong, and they often cause problems that you might attribute to other things in the various setups.
The syntax is fairly straightforward:
access to [something] by [someone] [access]
The order is important; if something matches, later tests will not be run.
The one I use looks like this:
#
# Protect passwords from prying eyes
#
access to attrs=userPassword
by dn="cn=nss,dc=example,dc=org" write
by anonymous auth
by self write
by * none
#
# set read-only attributes
#
access to attrs=uidNumber,gidNumber,uid,homeDirectory
by dn="cn=nss,dc=example,dc=org" write
by self read
by * read
#
# For all else, let the user edit his own entry and everyone else watch
#
access to *
by dn="cn=nss,dc=example,dc=org" write
by self write
by * read
Note that access to the user password can be set to auth; so that the database can authenticate a user without letting them see the password hash using an anonymous bind.
Contents
Assembly?
Assembly is the programming language that gives direct access to the instructions and registers of the processor. A program called the assembler compiles assembly language into machine code. NetBSD installs the GNU assembler "gas" into /usr/bin/as and this program assembles for the host processor architecture.
A higher-level compiler like "gcc" acts as a preprocessor to the assembler, by translating code from C (or other language) to assembler. Just run cc -S yourfile.c and look at the output yourfile.s to see assembly code. A higher-level compiler can probably write better assembly code than a human programmer who knows assembly language.
There remain a few reasons to use assembly language. For example:
- You need direct access to processor registers (for example, to set the stack pointer).
- You need direct access to processor instructions (like for vector arithmetic or for atomic operations).
- You want to improve or fix the higher-level compiler, assembler, or linker.
- You want to optimize code, because your higher-level compiler was not good enough.
- You want to learn assembly language.
i386
i386 architecture takes its name from the Intel 386, the first x86 processor to have a 32-bit mode. Other names for this architecture are:
- IA-32, which means Intel Architecture, 32 bit.
- x86, which can mean the 32-bit mode or the ancient 16-bit mode.
The i386 assembly language comes in two flavours: AT&T syntax and Intel syntax. Most programmers seem to prefer the Intel syntax.
nasm
NASM (the Netwide Assembler) is an x86 assembler that uses the Intel syntax. It is easily available via devel/nasm.
You can also use devel/yasm, which supports NASM syntax.
Hello world, NetBSD/i386
; Hello world, NetBSD/i386 4.0
section .note.netbsd.ident progbits alloc noexec nowrite
dd 0x00000007 ; Name size
dd 0x00000004 ; Desc size
dd 0x00000001 ; value 0x01
db "NetBSD", 0x00, 0x00 ; "NetBSD\0\0"
dd 400000003 ; __NetBSD_Version__ (please see <sys/param.h>)
section .data
msg db "Hello world!", 0x0a ; "Hello world\n"
len equ $ - msg
section .text
global _start
_start:
; write()
mov eax, 0x04 ; SYS_write
push len ; write(..., size_t nbytes)
push msg ; write(..., const void *buf, ...)
push 0x01 ; write(int fd, ...)
push 0x00
int 0x80
pop ebx
; exit()
mov eax, 0x01 ; SYS_exit
push 0x00 ; exit(int status)
push 0x00
int 0x80
How to compile and link
To use the above code you need to compile and then link it:
$ nasm -f elf hello.asm
$ ld -o hello hello.o
$ ./hello
Hello world!
gas
the portable GNU assembler
It uses AT&T syntax and is designed after the 4.2BSD assembler. You can use it on many CPU architectures.
Example:
.section ".note.netbsd.ident", "a"
.long 2f-1f
.long 4f-3f
.long 1
1: .asciz "NetBSD"
2: .p2align 2
3: .long 400000000
4: .p2align 2
.section .data
data_items: # this is an array
.long 3,39,41,21,42,34,42,23,38,37,15,37,16,17,18,25,23,12,31,2
.set DATASIZE, ( . - data_items) / 4 - 1
.section .text
.globl _start
_start:
movl $0, %edi # zero the index register
movl $DATASIZE, %ecx # set ecx to number of items
movl data_items(,%ecx,4), %eax # load first item
movl %eax, %ebx # its the biggest atm
main_loop:
decl %ecx # decrement counter
movl data_items(,%ecx,4), %eax # step to next element
cmpl %eax, %ebx # is it greater?
cmovll %eax, %ebx # set ebx to greater if it's less than cur. num.
jecxz end_prog # if we are at item 0, end iteration
jmp main_loop # again!
end_prog:
pushl %ebx # return largest number
pushl %ebx # BSD-ism (has to push twice?)
movl $1, %eax # call exit
int $0x80 # kernel
ret
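Assuming the code above is saved as largest.s (a file name chosen here for illustration), it can be assembled and linked like the nasm example on a NetBSD/i386 machine. The program returns the largest element as its exit status, so the result should show up in the shell's $?:

```shell
$ as -o largest.o largest.s
$ ld -o largest largest.o
$ ./largest
$ echo $?
42
```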
powerpc
PowerPC processors appear inside multiple different hardware platforms; NetBSD has at least 11 ports, see ?Platforms. The easiest way to obtain a PowerPC machine is probably to acquire a used Macintosh, choosing from among the supported models for NetBSD/macppc.
PowerPC processors have 32-bit registers and pointers and use big-endian byte order.
- A very few boards (not with NetBSD) run the PowerPC in little-endian mode to match the hardware.
- A few PowerPC processors also have a 64-bit mode. NetBSD 5.0 will support some Apple G5 machines with these processors, but only in 32-bit mode (see ppcg5 project).
gas
Here is an example of a program for gas:
## factorial.s
## This program is in the public domain and has no copyright.
###
## This is an example of an assembly program for NetBSD/powerpc.
## It computes the factorial of NUMBER using unsigned 32-bit integers
## and prints the answer to standard output.
.set NUMBER, 10
.section ".note.netbsd.ident", "a"
# ELF note to identify me as a native NetBSD program
# type = 0x01, desc = __NetBSD_Version__ from <sys/param.h>
##
.int 7 # length of name
.int 4 # length of desc
.int 0x01 # type
.ascii "NetBSD\0" # name
.ascii "\0" # padding
.int 500000003 # desc
.section ".data"
decbuffer:
.fill 16 # buffer for decimal ASCII
decbufend:
.ascii "\n" # newline at end of ASCII
.section ".text"
# PowerPC instructions need an alignment of 4 bytes
.balign 4
.globl _start
.type _start, @function
_start:
# compute factorial in %r31
li %r0, NUMBER
mtctr %r0 # ctr = number
li %r31, 1 # %r31 = factorial
li %r30, 1 # %r30 = next factor
factorial_loop:
mullw %r31, %r31, %r30 # multiply %r31 by next factor
addi %r30, %r30, 1 # increment next factor
bdnz+ factorial_loop # loop ctr times
# prepare to convert factorial %r31 to ASCII.
lis %r9, decbufend@ha
la %r4, decbufend@l(%r9) # %r4 = decbufend
lis %r8, decbuffer@ha
la %r29, decbuffer@l(%r8) # %r29 = decbuffer
li %r5, 1 # %r5 = length of ASCII
# Each loop iteration divides %r31 by 10 and writes digit to
# position %r4. Formula (suggested by gcc) to divide by 10,
# 0xcccccccd
# is to multiply by ----------- = 0.100000000005821
# 0x800000000
# which is to multiply by 0xcccccccd, then shift right 35.
##
.set numerator, 0xcccccccd
lis %r9, numerator@ha
la %r28, numerator@l(%r9) # %r28 = numerator
decloop:
cmpw %r29, %r4 # start of buffer <=> position
beq- buffer_overflow
# begin %r9 = (%r31 / 10)
mulhwu %r9, %r31, %r28 # %r9 = ((%r31 * %r28) >> 32)
addi %r4, %r4, -1 # move %r4 to next position
srwi %r9, %r9, 3 # %r9 = (%r9 >> 3) = %r31 / 10
mulli %r8, %r9, 10 # %r8 = (%r31 / 10) * 10
sub %r27, %r31, %r8 # %r27 = %r31 % 10 = digit
addi %r27, %r27, '0 # convert digit to ASCII
addi %r5, %r5, 1 # count this ASCII digit
stb %r27, 0(%r4) # write ASCII digit to buffer
mr. %r31, %r9 # %r31 /= 10, %r31 <=> 0
bne+ decloop # loop until %r31 == 0
# FALLTHROUGH
buffer_overflow:
# write(2) our factorial to standard output
li %r0, 4 # SYS_write from <sys/syscall.h>
li %r3, 1 # standard output
## %r4 # buffer
## %r5 # size of buffer
sc
# exit(2)
li %r0, 1 # SYS_exit from <sys/syscall.h>
li %r3, 0 # exit status
sc
.size _start, . - _start
With a NetBSD/powerpc system, you can run this program using
$ as -o factorial.o factorial.s
$ ld -o factorial factorial.o
$ ./factorial
3628800
$
Useful Documents
To learn about PowerPC assembly language, here are two documents to start with.
- IBM developerWorks. PowerPC Assembly. This is a very good introduction to PowerPC assembly. It provides and explains the Hello World example (but using a Linux system call).
SunSoft and IBM. System V Application Binary Interface, PowerPC Processor Supplement (PDF file, hosted by Linux Foundation). This is the specification for 32-bit PowerPC code in ELF systems. It establishes %r1 as the stack pointer and describes the stack layout. It explains the C calling conventions, how to pass arguments to and return values from C functions, how to align data structures, and which registers to save to the stack.
- NetBSD, Linux and OpenBSD (and FreeBSD?) all use ELF with PowerPC and all follow this specification, with a few deviations and extensions.
Wiki Pages
- ELF Executables for PowerPC. This introduces assembly language with a commented example.
Contents
Requirements
You will need a mobile phone with GPRS and Bluetooth, a Bluetooth device and a NetBSD system that supports Bluetooth (4.0 and above).
In this example, we are using a Nokia 6230i phone, a Broadcom USB dongle and NetBSD 4.99.11.
Setting up pppd
We need to create some pppd options and chat scripts. First, create the directories:
# mkdir -p /etc/ppp/peers
Create a /etc/ppp/options file containing
#
# default options file for [pppd(8)](//man.NetBSD.org/pppd.8)
#
57600
crtscts
local
defaultroute
usepeerdns
noipdefault
nodetach
and a /etc/ppp/chat.gsm file containing
#
# Chat script to dial out with GSM phone
#
ABORT "BUSY"
ABORT "NO CARRIER"
ABORT "DELAYED"
ABORT "NO DIALTONE"
ABORT "VOICE"
TIMEOUT 10
"" AT
OK-AT-OK AT&F
OK AT+CGDCONT=1,"IP","\U"
TIMEOUT 60
OK ATDT\T
CONNECT \c
Create a /etc/ppp/peers/gprs file containing
#
# pppd(8) options file for GPRS
#
# The Access Point Name (APN) used by your GSM Operator may need
# to be different from the "internet" used below.
#
pty "rfcomm_sppd -a phone -d ubt0 -s DUN -m encrypt"
connect "/usr/sbin/chat -V -f /etc/ppp/chat.gsm -U internet -T *99#"
noccp
Configuring Bluetooth
First, activate Bluetooth on your phone, on the Nokia 6230i as follows
Menu > Settings > Connectivity > Bluetooth > Bluetooth settings > My phone's name
Choose a name for your device, for this example I will use "My Nokia"
Menu > Settings > Connectivity > Bluetooth > Bluetooth settings > My phone's visibility
Choose "Shown to all"
Menu > Settings > Connectivity > Bluetooth > Bluetooth >
Choose "Bluetooth on"
Plug your Bluetooth dongle into your computer and you should see something like the following on the console
ubt0 at uhub0 port 1 configuration 1 interface 0
ubt0: Broadcom BCM92035DGROM, rev 1.10/0.04, addr 2
Now, we need to establish a Bluetooth connection between your phone and computer. Enable the Bluetooth dongle
# /etc/rc.d/bluetooth start
Configuring Bluetooth controllers: ubt0.
starting Bluetooth Link Key/PIN Code manager
starting Bluetooth Service Discovery server
and perform an inquiry to discover the Bluetooth device address of your phone
# btconfig ubt0 inquiry
Device Discovery from device ubt0 ..... 1 response
1: bdaddr 00:22:b3:22:3e:32
: name "My Nokia"
...
Add an alias of the bdaddr (yours will be different) to /etc/bluetooth/hosts
# echo "00:22:b3:22:3e:32 phone" >> /etc/bluetooth/hosts
Next set up a PIN in order to pair the phone with your Bluetooth dongle
# btpin -a phone -r -l 6
PIN: 928434
and attempt to open a manual RFCOMM connection to the Dial Up Networking (DUN) service on the phone (press ^C to close the connection)
# rfcomm_sppd -a phone -s DUN
Starting on stdio...
AT
OK
ATI
Nokia
ATI3
Nokia 6230i
OK
^C
Your phone should prompt you to accept the connection from your computer; accept it and enter the PIN that btpin generated to complete the pairing process.
Now we can start pppd
# pppd call gprs
Serial connection established.
Connect: ppp0 <--> /dev/ttyp9
local IP address 10.177.120.221
Remote IP address 10.4.4.4
Primary DNS address IP
Secondary DNS address IP
You are now online. To terminate your pppd session just press Control + C, or send a SIGHUP to the pppd process.
Advanced Configuration
You may find that some phones require authentication when connecting to PPP; the username/password will be provided by your GSM Operator.
Create a /etc/ppp/chap-secrets file, owned by root and unreadable by anybody else (mode 0600) containing
#
# CHAP/PAP secrets file
#
"user" * "pass"
and add the following line to the /etc/ppp/peers/gprs file
user "user"
To automatically configure the DNS server when the PPP link is brought up, create a /etc/ppp/ip-up file containing
#!/bin/sh
#
# ip-up <interface> <tty> <speed> <local-ip> <remote-ip> <ipparam>
# $1 $2 $3 $4 $5 $6
#
if [ -f /etc/ppp/resolv.conf ]; then
rm -f /etc/resolv.conf
mv /etc/ppp/resolv.conf /etc/resolv.conf
fi
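pppd only runs /etc/ppp/ip-up if the file is executable, so after creating it:

```shell
# chmod 755 /etc/ppp/ip-up
```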
See Also
The Bluetooth section in the NetBSD Guide contains more general Bluetooth configuration, and details of all PPP options can be found in the pppd(8) manpage.
Setting up a DHCP server for your home or company network is pretty simple with NetBSD. You don't need to install any software, because everything you need is part of the base system. Just create the file /etc/dhcpd.conf like this:
deny unknown-clients;
ddns-update-style none;
subnet 192.168.0.0 netmask 255.255.255.0 {
range 192.168.0.200 192.168.0.254;
default-lease-time 28800;
max-lease-time 86400;
option broadcast-address 192.168.0.255;
option domain-name "mycompanydomainname.com";
option domain-name-servers 194.152.64.35, 194.25.2.132;
option routers 192.168.0.1;
host ftp-server {
hardware ethernet 00:00:0a:d8:39:ee;
fixed-address 192.168.0.199;
}
host sparc {
hardware ethernet 00:50:04:01:ee:20;
fixed-address 192.168.0.198;
}
}
Now add the service to your /etc/rc.conf. This way the DHCP server will be started on every reboot.
dhcpd=yes
dhcpd_flags="-q ex0"
ex0 is the network interface to listen on for DHCP requests. This is important if you have more than one network interface. If you don't, you can omit the second line.
Before starting the service, you have to create the lease file that the DHCP server needs.
# touch /var/db/dhcpd.leases
Now start the service:
# /etc/rc.d/dhcpd start
To test if your dhcp server is running, run dhcpcd on another host on the same network.
# dhcpcd
Also check your /var/db/dhcpd.leases file. That's it. Have fun.
Additional Information
Contents
Introduction
This wiki page examines the transformation of a very simple C program into a running ELF executable containing PowerPC machine code. A NetBSD system (NetBSD 4.0.1/macppc in these examples) and its toolchain (gcc and GNU binutils) perform several steps in this transformation.
- gcc translates our C code to assembly code.
- gcc calls GNU as to translate the assembly code to machine code in an ELF relocatable object.
- gcc calls GNU ld to link our relocatable object with the C runtime and the C library to form an ELF executable object.
- The NetBSD kernel loads ld.elf_so, which loads our ELF executable and the C library (an ELF shared object) to run our program.
So far, this wiki page examines only the first two steps.
A very simple C program
This program is only one C file, which contains only one main function, which calls printf(3) to print a single message, then returns 0 as the exit status.
#include <stdio.h>
int
main(int argc, char *argv[])
{
printf("%s", "Greetings, Earth!\n");
return 0;
}
The C compiler gcc likes to use its knowledge of builtin functions to manipulate code. The version of gcc in NetBSD 4.0.1/macppc will simplify the printf statement to puts("Greetings, Earth!"); so the main function effectively calls puts(3) once and then returns 0.
We can apply gcc(1) in the usual way to compile this program. (With NetBSD, cc or gcc invokes the same command, so we use either name.) Then we can run our program:
$ cc -o greetings greetings.c
$ ./greetings
Greetings, Earth!
$
We can apply gcc with the -v option to see some extra information. (Unlike most other commands, gcc does not allow combined options. Instead of gcc -vo, we must type gcc -v -o.) The gcc driver program actually runs three other commands. Here is the output from one run using my NetBSD 4.0.1 system. I have put the three commands in bold.
$ cc -v -o greetings greetings.c
Using built-in specs.
Target: powerpc--netbsd
Configured with: /usr/src/tools/gcc/../../gnu/dist/gcc4/configure --enable-long-
long --disable-multilib --enable-threads --disable-symvers --build=i386-unknown-
netbsdelf4.99.3 --host=powerpc--netbsd --target=powerpc--netbsd
Thread model: posix
gcc version 4.1.2 20061021 prerelease (NetBSD nb3 20061125)
**/usr/libexec/cc1 -quiet -v greetings.c -quiet -dumpbase greetings.c -auxbase gr**
**eetings -version -o /var/tmp//ccVB1DcZ.s**
#include "..." search starts here:
#include <...> search starts here:
/usr/include
End of search list.
GNU C version 4.1.2 20061021 prerelease (NetBSD nb3 20061125) (powerpc--netbsd)
compiled by GNU C version 4.1.2 20061021 (prerelease) (NetBSD nb3 200611
25).
GGC heuristics: --param ggc-min-expand=38 --param ggc-min-heapsize=77491
Compiler executable checksum: 325f59dbd937debe20281bd6a60a4aef
**as -mppc -many -V -Qy -o /var/tmp//ccMiXutV.o /var/tmp//ccVB1DcZ.s**
GNU assembler version 2.16.1 (powerpc--netbsd) using BFD version 2.16.1
**ld --eh-frame-hdr -dc -dp -e _start -dynamic-linker /usr/libexec/ld.elf_so -o g**
**reetings /usr/lib/crt0.o /usr/lib/crti.o /usr/lib/crtbegin.o /var/tmp//ccMiXutV.**
**o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/crtend.o /usr/lib/crtn.o**
The first command, /usr/libexec/cc1, is internal to gcc and is not for our direct use. The other two commands, as and ld, are external to gcc. We could use as and ld without gcc, if we wanted to.
The first command, /usr/libexec/cc1, is the C compiler proper; it compiles C code and outputs assembly code, in a file with the .s suffix. In the above example, it created the assembly version of our main function. The second command, as, assembles the .s file to machine code, in a relocatable object file with the .o suffix. It created the machine code version of our main function. The third command, ld, links object files into an executable file. It combined our main function with the C runtime and the C library to create our program of greetings.
The .s assembly file and the .o object file were temporary files, so the gcc driver program deleted them. We only keep the final executable of greetings.
The program in PowerPC assembly language
The manual page for gcc(1) explains that we can use the -S option to stop gcc with the assembly code. For PowerPC targets, gcc outputs register numbers by default; the -mregnames option tells gcc to output register names instead. If you are learning assembly language, then cc -mregnames -S is a good way to produce examples of assembly code.
The command cc -mregnames -S greetings.c produces the output file greetings.s which contains the assembly version of our main function. (If you want greetings.s to contain PowerPC assembly code, then you need to use a compiler that targets PowerPC.) The assembly syntax allows for comments, assembler directives, instructions and labels.
- Comments begin with a '#' sign, though gcc never puts any comments in its generated code. PowerPC uses '#', unlike many other architectures that use ';' instead.
- Assembler directives have names that begin with a dot (like .section or .string) and may take arguments.
- Instructions have mnemonics without a dot (like li or stw) and may take operands.
- Labels end with a colon (like .LC0: or main:) and save the current address into a symbol.
The PowerPC processor executes instructions. Most PowerPC instructions operate on the registers inside the processor. There are other instructions that load registers from memory or store registers to memory. Each of the general purpose registers (named r0 through r31) and the link register (named lr) hold a 32-bit integer.
Assembly code may contain register numbers (0 through 31) or register names. Register numbers can be confusing: a 3 in the code might refer to the general purpose register r3, the floating point register f3, or the immediate value 3. The cc -mregnames flag uses the assembly syntax for register names, which is to put a '%' sign before each name, as in %r3 or %f3. This is necessary to distinguish register %r3 from a symbol named r3.
Commented copy of greetings.s
Here is a copy of greetings.s (from the gcc of NetBSD 4.0.1/macppc) with added comments. Each instruction has a comment in pseudo-C to show its effect, if you know the C language. Pretend that the registers are (char *) for indexing, but int or (int *) for assignment.
# This is a commented version of greeting.s, the 32-bit PowerPC
# assembly code output from cc -mregnames -S greetings.c
# .file takes the name of the original source file,
# because this was a generated file. I guess that this
# allows error messages or debuggers to blame the
# original source file.
.file "greetings.c"
# Enter the .rodata section for read-only data. String constants
# belong in this section.
.section .rodata
# For PowerPC, .align takes an exponent of 2.
# So .align 2 gives an alignment of 4 bytes, so that
# the current address is a multiple of 4.
.align 2
# .string inserts a C string, and the assembler provides
# the terminating \0 byte. The label sets the symbol
# .LC0 to the address of the string.
.LC0:
.string "Greetings, Earth!"
# Enter the .text section for program text, which is the
# executable part.
.section ".text"
# We need an alignment of 4 bytes for the following
# PowerPC processor instructions.
.align 2
# We need to export main as a global symbol so that the
# linker will see it. ELF wants to know that main is a
# @function symbol, not an @object symbol.
.globl main
.type main, @function
main:
# The code for the main function begins here.
# Passed in general purpose registers:
# r1 = stack pointer, r3 = argc, r4 = argv
# Passed in link register:
# lr = return address
# The int return value goes in r3.
# Allocate 32 bytes for our stack frame. Use the
# atomic instruction "store word with update" (stwu) so
# that r1[0] always points to the previous stack frame.
stwu %r1,-32(%r1) # r1[-32] = r1; r1 -= 32
# Save registers r31 and lr to the stack. We need to
# save r31 because it is a nonvolatile register, and to
# save lr before any function calls. Now r31 belongs in
# the register save area at the top of our stack frame,
# but lr belongs in the previous stack frame, in the
# lr save word at (r1[0])[0] == r1[36].
mflr %r0 # r0 = lr
stw %r31,28(%r1) # r1[28] = r31
stw %r0,36(%r1) # r1[36] = r0
# Save argc, argv to the stack.
mr %r31,%r1 # r31 = r1
stw %r3,8(%r31) # r31[8] = r3 /* argc */
stw %r4,12(%r31) # r31[12] = r4 /* argv */
# Call puts(.LC0). First we need to load r3 = .LC0, but
# each instruction can load only 16 bits.
# .LC0@ha = (.LC0 >> 16) & 0xffff
# .LC0@l = .LC0 & 0xffff
# This method uses "load immediate shifted" (lis) to
# load r9 = (.LC0@ha << 16), then "load address" (la) to
# load r3 = &(r9[.LC0@l]), same as r3 = (r9 + .LC0@l).
lis %r9,.LC0@ha
la %r3,.LC0@l(%r9) # r3 = .LC0
# The "bl" instruction calls a function; it also sets
# the link register (lr) to the address of the next
# instruction after "bl" so that puts can return here.
bl puts # puts(r3)
# Load r3 = 0 so that main returns 0.
li %r0,0 # r0 = 0
mr %r3,%r0 # r3 = r0
# Point r11 to the previous stack frame.
lwz %r11,0(%r1) # r11 = r1[0]
# Restore lr from r11[4]. Restore r31 from r11[-4],
# same as r1[28].
lwz %r0,4(%r11) # r0 = r11[4]
mtlr %r0 # lr = r0
lwz %r31,-4(%r11) # r31 = r11[-4]
# Free the stack frame, then return.
mr %r1,%r11 # r1 = r11
blr # return r3
# End of main function.
# ELF wants to know the size of the function. The dot
# symbol is the current address, now the end of the
# function, and the "main" symbol is the start, so we
# set the size to dot minus main.
.size main, .-main
# This is the tag of the gcc from NetBSD 4.0.1; the
# assembler will put this string in the object file.
.ident "GCC: (GNU) 4.1.2 20061021 prerelease (NetBSD nb3 20061125)"
The above code is not a complete, standalone assembly program! It only contains a main function, for linking with the C runtime and the C library. It obeys the ELF and PowerPC conventions for the use of registers. (These conventions require the code to save r31 but not r9.) The bl puts instruction is our evidence that the program calls puts(3) instead of printf(3).
The compiler did not optimize the above code. Some optimizations are obvious! Consider the code that saves argc and argv to the stack. We could use r1 directly instead of copying r1 to r31. Going further, we could delete that code and never save argc and argv, because this main function never uses argc and argv!
Optimizing the main function
Expect a compiler like gcc to write better assembly code than a human programmer who knows assembly language. The best way to optimize the assembly code is to enable some gcc optimization flags.
Released software often uses the -O2 flag, so here is a commented copy of greetings.s (from the gcc of NetBSD 4.0.1/macppc) with -O2 in use.
# This is a commented version of the optimized assembly output
# from cc -O2 -mregnames -S greetings.c
.file "greetings.c"
# Our string constant is now in a section that would allow an
# ELF linker to remove duplicate strings. See the "info as"
# documentation for the .section directive.
.section .rodata.str1.4,"aMS",@progbits,1
.align 2
.LC0:
.string "Greetings, Earth!"
# Enter the .text section and declare main, as before.
.section ".text"
.align 2
.globl main
.type main, @function
main:
# We use registers as before:
# r1 = stack pointer, r3 = argc, r4 = argv,
# lr = return address, r3 = int return value
# Set r0 = lr so that we can save lr later.
mflr %r0 # r0 = lr
# Allocate only 16 bytes for our stack frame, and
# point r1[0] to the previous stack frame.
stwu %r1,-16(%r1) # r1[-16] = r1; r1 -= 16
# Save lr in the lr save word at (r1[0])[0] == r1[20],
# before calling puts(.LC0).
lis %r3,.LC0@ha
la %r3,.LC0@l(%r3) # r3 = .LC0
stw %r0,20(%r1) # r1[20] = r0
bl puts # puts(r3)
# Restore lr, free stack frame, and return 0.
lwz %r0,20(%r1) # r0 = r1[20]
li %r3,0 # r3 = 0
addi %r1,%r1,16 # r1 = r1 + 16
mtlr %r0 # lr = r0
blr # return r3
# This main function is smaller than before but ELF
# wants to know the size.
.size main, .-main
.ident "GCC: (GNU) 4.1.2 20061021 prerelease (NetBSD nb3 20061125)"
The optimized version of the main function does not use the r9, r11 or r31 registers; and it does not save r31, argc or argv to the stack. The stack frame occupies only 16 bytes, not 32 bytes.
The main function barely uses the stack frame, only writing the frame pointer to 0(r1) and never reading anything. The main function must reserve 4 bytes of space at 4(r1) for an lr save word, in case the puts function saves its link register. The frame pointer and lr save word together occupy 8 bytes of stack space. The main function allocates 16 bytes, instead of only 8 bytes, because of a convention that the stack pointer is a multiple of 16.
The relocatable object file
Now that we have the assembly code, there are two more steps before we have the final executable.
- The first step is to run the assembler (as), which translates the assembly code to machine code, and stores the machine code in an ELF relocatable object.
- The second step is to run the linker (ld), which combines some ELF relocatables into one ELF executable.
There are various tools that can examine ELF files. The command nm(1) lists the global symbols in an object file. The commands objdump(1) and readelf(1) show other information. These commands can examine both relocatables and executables. Though the executable is more interesting, the relocatable is simpler.
To continue our example, we can run the assembler with greetings.s to produce greetings.o. We use the optimized code in greetings.s from cc -O2 -mregnames -S greetings.c, because it is shorter. We feed our file greetings.s to /usr/bin/as with a simple command.
$ as -o greetings.o greetings.s
The output greetings.o is a relocatable object file, and file(1) confirms this.
$ file greetings.o
greetings.o: ELF 32-bit MSB relocatable, PowerPC or cisco 4500, version 1 (SYSV), not stripped
List of sections
The source greetings.s had assembler directives for two sections (.rodata.str1.4 and .text), so the ELF relocatable greetings.o should contain those two sections. The command objdump can list the sections.
$ objdump
Usage: objdump <option(s)> <file(s)>
Display information from object <file(s)>.
At least one of the following switches must be given:
...
-h, --[section-]headers Display the contents of the section headers
...
$ objdump -h greetings.o
greetings.o: file format elf32-powerpc
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 0000002c 00000000 00000000 00000034 2**2
CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
1 .data 00000000 00000000 00000000 00000060 2**0
CONTENTS, ALLOC, LOAD, DATA
2 .bss 00000000 00000000 00000000 00000060 2**0
ALLOC
3 .rodata.str1.4 00000014 00000000 00000000 00000060 2**2
CONTENTS, ALLOC, LOAD, READONLY, DATA
4 .comment 0000003c 00000000 00000000 00000074 2**0
CONTENTS, READONLY
This command verifies the presence of the .text and .rodata.str1.4 sections. The .text section begins at file offset 0x34 and has size 0x2c, in bytes. The .rodata.str1.4 section begins at file offset 0x60 and has size 0x14.
Because the source greetings.s does not have assembler directives for the .data, .bss or .comment sections, there must be another explanation for those three sections. The .data and .bss sections have size 0x0. Perhaps for traditional reasons, the assembler puts these sections into every object file; because the source greetings.s never mentioned the .data or .bss sections, nor allocated space in them, the assembler output them as empty sections. (The a.out(5) format always had text, data and bss segments. The elf(5) format distinguishes segments and sections, and also allows for arbitrary sections like .rodata.str1.4 and .comment.)
That leaves the mystery of the .comment section. The objdump command accepts -j to select a section and -s to show the contents, so objdump -j .comment -s greetings.o dumps the 0x3c bytes in that section.
$ objdump -j .comment -s greetings.o
greetings.o: file format elf32-powerpc
Contents of section .comment:
0000 00474343 3a202847 4e552920 342e312e .GCC: (GNU) 4.1.
0010 32203230 30363130 32312070 72657265 2 20061021 prere
0020 6c656173 6520284e 65744253 44206e62 lease (NetBSD nb
0030 33203230 30363131 32352900 3 20061125).
This is just the string from the .ident assembler directive, between a pair of \0 bytes. So whenever gcc generates an .ident directive, the assembler leaves this .comment section to identify the compiler that produced this relocatable. (The "info as" documentation for the .ident directive, shipped with NetBSD 4.0.1, continues to claim that the assembler "does not actually emit anything for it", but in fact the assembler emits this .comment section.)
The objdump tool also has a disassembler through its -d option. Disassembly is the reverse process of assembly; it translates machine code to assembly code. We could disassemble greetings.o, but the output would have a few defects, because of symbols that lack their final values.
Of symbols and addresses
Our assembly code in greetings.s had three symbols. The first symbol had the name .LC0 and pointed to our string.
.LC0:
.string "Greetings, Earth!"
The second symbol had the name main. It was a global symbol that pointed to a function.
.globl main
.type main, @function
main:
mflr %r0
...
The third symbol had the name puts. Our code used puts in a function call, though it never defined the symbol.
bl puts
A symbol has a name and an integer value. In assembly code, a symbol acts as an inline integer constant. The most common use of a symbol is to hold an address, pointing to either a function or a datum. When a symbol appears as an operand to an instruction, the assembler inlines the value of that symbol into the machine code. The problem is that the assembler often does not know the final value of the symbol. So the assembler as saves some information about symbols into the ELF file. The linker ld can use this information to relocate symbols to their final values, resolve undefined symbols and inline the final values into the machine code. The fact that ld relocates symbols is also the reason that .o files are relocatable objects.
The nm command shows the names of symbols in an object file. The output of nm shows that greetings.o contains only two symbols. The .LC0 symbol is missing.
$ nm greetings.o
00000000 T main
U puts
The nm tool comes from Unix tradition, and remains a great way to check the list of symbols. For each symbol, nm displays the hexadecimal value, a single letter for the type, then the name. The letter 'T' marks symbols that point into a text section, either the .text section or some other arbitrary section of executable code. The letter 'U' marks undefined symbols, which do not have a value.
The nm tool claims that symbol main has address 0x00000000, which seems to be a useless value. The actual meaning is that main points to offset 0x0 within section .text. A more detailed view of the symbol table provides evidence of this.
Fate of symbols
The machine code in greetings.o is incomplete. If the address of the string "Greetings, Earth!" is not zero, then something must fix the instructions at 0x8 and 0xc. To avoid an infinite loop, something must fix the instruction at 0x14 to find the function puts. The linker will have the task to edit and finish the machine code.
(Because this part of the wiki page now comes before the part about machine code, this disassembly should probably not be here.)
00000000 <main>:
0: (31|00000|01000|02a6) mflr r0
4: (37|00001|00001|fff0) stwu r1,-16(r1)
8: (15|00011|00000|0000) lis r3,0
c: (14|00011|00011|0000) addi r3,r3,0
10: (36|00000|00001|0014) stw r0,20(r1)
14: (18|00000|00000|0001) bl 14 <main+0x14>
18: (32|00000|00001|0014) lwz r0,20(r1)
1c: (14|00011|00000|0000) li r3,0
20: (14|00001|00001|0010) addi r1,r1,16
24: (31|00000|01000|03a6) mtlr r0
28: (19|10100|00000|0020) blr
The above disassembly does not tell that the code at 0x8, 0xc and 0x14 is incomplete (though the infinite loop at 0x14 is a hint). There must be another way to find where the above code uses a symbol that lacks a final value. The ELF relocatable greetings.o bears information about both relocating symbols and resolving undefined symbols; some uses of objdump or readelf can reveal this information.
ELF, like any object format, allows for a symbol table. The list of symbols from nm greetings.o is only an incomplete view of this table.
$ nm greetings.o
00000000 T main
U puts
The command objdump -t shows the symbol table in more detail.
$ objdump -t greetings.o
greetings.o: file format elf32-powerpc
SYMBOL TABLE:
00000000 l df *ABS* 00000000 greetings.c
00000000 l d .text 00000000 .text
00000000 l d .data 00000000 .data
00000000 l d .bss 00000000 .bss
00000000 l d .rodata.str1.4 00000000 .rodata.str1.4
00000000 l d .comment 00000000 .comment
00000000 g F .text 0000002c main
00000000 *UND* 00000000 puts
The first column is the value of the symbol in hexadecimal. By some coincidence, every symbol in greetings.o has the value zero. The second column gives letter 'l' for a local symbol or letter 'g' for a global symbol. The third column gives the type of symbol, with 'd' for a debugging symbol, lowercase 'f' for a filename or uppercase 'F' for a function symbol. The fourth column gives the section of the symbol, or 'ABS' for an absolute symbol, or 'UND' for an undefined symbol. The fifth column gives the size of the symbol in hexadecimal. The sixth column gives the name of the symbol.
The filename symbol greetings.c exists because the assembly code greetings.s had a directive .file greetings.c. The symbol main has a nonzero size because of the .size directive.
Each section of this ELF relocatable has a symbol that points to address 0x0 in the section. Then every section of this relocatable must contain address 0x0. A view of the section headers in greetings.o confirms that every section begins at address 0x0.
$ objdump -h greetings.o
greetings.o: file format elf32-powerpc
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 0000002c 00000000 00000000 00000034 2**2
CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
1 .data 00000000 00000000 00000000 00000060 2**0
CONTENTS, ALLOC, LOAD, DATA
2 .bss 00000000 00000000 00000000 00000060 2**0
ALLOC
3 .rodata.str1.4 00000014 00000000 00000000 00000060 2**2
CONTENTS, ALLOC, LOAD, READONLY, DATA
4 .comment 0000003c 00000000 00000000 00000074 2**0
CONTENTS, READONLY
The output of objdump -h shows the address of each section in the VMA and LMA columns. (ELF files always use the same address for both VMA and LMA.) All five sections in greetings.o begin at address 0x0. Thus, all four sections marked ALLOC would overlap in memory. The linker must relocate these four sections so that they do not overlap.
The value of each symbol is the address. The section of each symbol serves to disambiguate addresses where sections overlap. Thus symbol main points to address 0x0 in the section .text, not any other section. Because every section begins at address 0x0, each address is relative to the beginning of the section. Therefore, symbol main points to offset 0x0 into the section .text.
Each relocation record names an offset within a section, a relocation type, and a symbol. The linker patches the machine code at that offset with the final value of the symbol, in the format that the type directs: R_PPC_ADDR16_HA writes the high-adjusted 16 bits of the address, R_PPC_ADDR16_LO writes the low 16 bits, and R_PPC_REL24 writes a 24-bit relative branch displacement. The command objdump -r lists the relocation records.
$ objdump -r greetings.o
greetings.o: file format elf32-powerpc
RELOCATION RECORDS FOR [.text]:
OFFSET TYPE VALUE
0000000a R_PPC_ADDR16_HA .rodata.str1.4
0000000e R_PPC_ADDR16_LO .rodata.str1.4
00000014 R_PPC_REL24 puts
Disassembly and machine code
Disassembly
GNU binutils provide both assembly and the reverse process, disassembly. While as does assembly, objdump -d does disassembly. Both programs use the same library of opcodes.
By default, objdump -d disassembles all executable sections. (The -j option can select a section to disassemble. Our example relocatable greetings.o has only one executable section, so -j .text is optional.) The disassembler works better with linked executable files. It can also disassemble relocatables like greetings.o, but the output will confuse the reader because of undefined symbols, and symbols not relocated to their final values.
$ objdump -d greetings.o
greetings.o: file format elf32-powerpc
Disassembly of section .text:
00000000 <main>:
0: 7c 08 02 a6 mflr r0
4: 94 21 ff f0 stwu r1,-16(r1)
8: 3c 60 00 00 lis r3,0
c: 38 63 00 00 addi r3,r3,0
10: 90 01 00 14 stw r0,20(r1)
14: 48 00 00 01 bl 14 <main+0x14>
18: 80 01 00 14 lwz r0,20(r1)
1c: 38 60 00 00 li r3,0
20: 38 21 00 10 addi r1,r1,16
24: 7c 08 03 a6 mtlr r0
28: 4e 80 00 20 blr
The disassembled code has a slightly different syntax. Every instruction has a label, and each label is the hexadecimal address. The hex after each label is the machine code for that instruction. The syntax is more ambiguous, so register names do not begin with a '%' sign, "r3" can be a register instead of a symbol, and "14" can be a label instead of an immediate value.
The size of every PowerPC instruction is four bytes. PowerPC architecture requires this fixed width of four bytes or 32 bits for every instruction. It also requires an alignment of four bytes for every instruction, meaning that the address of every instruction is a multiple of four. The above code meets both requirements.
The disassembled code should resemble the assembly code in greetings.s. A comparison shows that every instruction is the same, except for three instructions.
- Address 0x8 has lis r3,0 instead of lis %r3,.LC0@ha.
- Address 0xc has addi r3,r3,0 instead of la %r3,.LC0@l(%r3).
- Address 0x14 has bl 14 instead of bl puts.
The fate of the symbols .LC0 and puts explains these differences. The linker will inline the correct values of .LC0 and puts, so that these three instructions make sense. Because we have the source file greetings.s, we know about the .LC0 and puts symbols.
If the reader of objdump -d greetings.o would not know about these symbols, then the three instructions at 0x8, 0xc and 0x14 would seem strange, useless and wrong.
- The "load immediate shifted" (lis) instruction shifts the immediate value leftward by 16 bits, then loads the register with the shifted value. So lis r3,0 shifts zero leftward by 16 bits, but the shifted value remains zero. The unnecessary shift seems strange, but it is not a problem.
- The "add immediate" (addi) instruction does addition, so addi r3,r3,0 increments r3 by zero, which effectively does nothing! The instruction seems unnecessary and useless.
- The instruction at address 0x14 is bl 14 , which branches to label 14, effectively forming an infinite loop because it branches to itself! Something is wrong.
If the reader understands that the infinite loop is actually a branch to function puts, then the function call still seems wrong, because puts uses the argument in r3, and the code loads r3 with zero. Therefore the function call might be puts(NULL) or puts(&main). Zero is the null pointer, and also the address of the main function, but function puts wants the address of some string. Before the linker relocates some things away from zero, now zero has two or three redundant meanings. The reader cannot follow the code.
In the build of a large C program, there are many .c and .o files but no .s files. (NetBSD and pkgsrc both provide many examples of this.) If someone ran objdump -d to read a .o file, then the reader would be unable to follow and understand the disassembly, because of undefined symbols, symbols not relocated to their final values, and overlapping addresses with redundant meanings.
A better understanding of how symbols fit into machine code would help.
Machine code in parts
The output of objdump -d has the machine code in hexadecimal. This allows the reader to identify individual bytes. This is good with architectures that organize opcodes and operands into bytes.
PowerPC packs the opcodes and operands more tightly into bits. Each instruction has 32 bits. The first 6 bits (at the big end) have the opcode. In a typical instruction, the next 5 bits pick the first register, the next 5 bits pick the second register, and the remaining 16 bits hold an immediate value. A filter program that takes the hexadecimal machine code from objdump -d and splits each instruction into these four parts would be helpful.
One can write the filter program using a scripting language that provides both regular expressions and bit-shifting operations. Perl (available in lang/perl5) is such a language. Here follows machine.pl, such a script.
#!/usr/bin/env perl
# usage: objdump -d ... | perl machine.pl
#
# The output of objdump -d shows the machine code in hexadecimal. This
# script converts the machine code to a format that shows the parts of a
# typical PowerPC instruction such as "addi".
#
# The format is (opcode|register-1|register-2|immediate-value),
# with digits in (decimal|binary|binary|hexadecimal).
use strict;
use warnings;
my $byte = "[0-9a-f][0-9a-f]";
my $word = "$byte $byte $byte $byte";
while (defined(my $line = <ARGV>)) {
chomp $line;
if ($line =~ m/^([^:]*:\s*)($word)(.*)$/) {
my ($before, $code, $after) = ($1, $2, $3);
$code =~ s/ //g;
$code = hex($code);
my $opcode = $code >> (32-6); # first 6 bits
my $reg1 = ($code >> (32-11)) & 0x1f; # next 5 bits
my $reg2 = ($code >> (32-16)) & 0x1f; # next 5 bits
my $imm = $code & 0xffff; # last 16 bits
$line = sprintf("%s(%2d|%05b|%05b|%04x)%s",
$before, $opcode, $reg1, $reg2, $imm,
$after);
}
print "$line\n";
}
Here follows the disassembly of greetings.o, with the machine code in parts.
$ objdump -d greetings.o | perl machine.pl
greetings.o: file format elf32-powerpc
Disassembly of section .text:
00000000 <main>:
0: (31|00000|01000|02a6) mflr r0
4: (37|00001|00001|fff0) stwu r1,-16(r1)
8: (15|00011|00000|0000) lis r3,0
c: (14|00011|00011|0000) addi r3,r3,0
10: (36|00000|00001|0014) stw r0,20(r1)
14: (18|00000|00000|0001) bl 14 <main+0x14>
18: (32|00000|00001|0014) lwz r0,20(r1)
1c: (14|00011|00000|0000) li r3,0
20: (14|00001|00001|0010) addi r1,r1,16
24: (31|00000|01000|03a6) mtlr r0
28: (19|10100|00000|0020) blr
The disassembly now shows the machine code with the opcode in decimal, then the next 5 bits in binary, then another 5 bits in binary, then the remaining 16 bits in hexadecimal.
The "load word and zero" (lwz) instruction given lwz X,N(Y) would load register X with a value from memory. It indexes memory using register Y as a pointer and value N as an offset in bytes. Thus the memory location is N bytes after where register Y points. The mnemonic lwz uses opcode 32. The next 5 bits hold the register number for X. Another 5 bits hold the register number for Y. The remaining 16 bits hold the offset value N. Given lwz r0,20(r1) then r0 is 00000 in binary, r1 is 00001 in binary, 20 is 0x14 in hexadecimal, so the filter script would write (32|00000|00001|0014).
It becomes apparent that "store word" (stw) uses opcode 36, while "store word with update" (stwu) uses opcode 37. Given stw r0,20(r1) then the filter script would write (36|00000|00001|0014). Given stwu r1,-16(r1) then -16 is 0xfff0 in hexadecimal 2s complement, so the filter script would write (37|00001|00001|fff0).
The "add immediate" (addi) instruction given addi X,Y,Z would load register X with the sum of register Y and immediate value Z. The mnemonic addi uses opcode 14. The next 5 bits hold the register number for X. Another 5 bits hold the register number for Y. The remaining 16 bits hold the immediate value Z. Given addi r3,r3,0 then r3 is 00011 in binary, so the filter script would write (14|00011|00011|0000). Given addi r1,r1,16, then r1 is 00001 in binary, 16 is 0x10 in hexadecimal, so the filter script would write (14|00001|00001|0010).
The addi instruction has one more quirk. The second operand Y is either the immediate value 0, or a register number 1 through 31. (Some other instructions have this same quirk and cannot read register 0.) Thus addi 4,0,5 would actually do r4 = 0 + 5 instead of r4 = r0 + 5, as if register 0 would contain value 0 instead of the value in register 0. This quirk allows the "load immediate" (li) mnemonic, which also uses opcode 14, to load an immediate value by adding it to zero. So li r3,0 is the same as addi 3,0,0 which becomes (14|00011|00000|0000).
The "load address" (la) instruction given la X,N(Y) would load register X with the address of N bytes after where register Y points. This is the same as adding immediate value N to register Y, so la X,N(Y) and addi X,Y,N are the same, and both use opcode 14.
When machine code contains opcode 14, then the disassembler tries to be smart about choosing an instruction mnemonic. Here follows a quick example.
$ cat quick-example.s
.section .text
addi 4,0,5 # bad
la 3,3(0) # very bad
la 3,0(3)
la 5,2500(3)
$ as -o quick-example.o quick-example.s
$ objdump -d quick-example.o | perl machine.pl
quick-example.o: file format elf32-powerpc
Disassembly of section .text:
00000000 <.text>:
0: (14|00100|00000|0005) li r4,5
4: (14|00011|00000|0003) li r3,3
8: (14|00011|00011|0000) addi r3,r3,0
c: (14|00101|00011|09c4) addi r5,r3,2500
If the second register operand to opcode 14 is 00000, then the machine code looks like an instruction "li", so the disassembler uses the mnemonic "li". Otherwise the disassembler prefers mnemonic "addi" to "la".
Opcodes more strange
The filter script shows the four parts of a typical instruction, but not all instructions have those four parts. The instructions that do branching or access special registers are not typical instructions.
Here again is the disassembly of the main function in greetings.o:
00000000 <main>:
0: (31|00000|01000|02a6) mflr r0
4: (37|00001|00001|fff0) stwu r1,-16(r1)
8: (15|00011|00000|0000) lis r3,0
c: (14|00011|00011|0000) addi r3,r3,0
10: (36|00000|00001|0014) stw r0,20(r1)
14: (18|00000|00000|0001) bl 14 <main+0x14>
18: (32|00000|00001|0014) lwz r0,20(r1)
1c: (14|00011|00000|0000) li r3,0
20: (14|00001|00001|0010) addi r1,r1,16
24: (31|00000|01000|03a6) mtlr r0
28: (19|10100|00000|0020) blr
Assembly code uses "branch and link" (bl) to call functions and "branch to link register" (blr) to return from functions.
- The instruction bl branches to the address of a function, and stores the return address in the link register.
- The instruction blr branches to the address in the link register.
- The instructions "move from link register" (mflr) and "move to link register" (mtlr) access the link register, so that a function may save its return address while it uses bl to call other functions.
Every processor has a program counter to hold the address of the current instruction. The branch instructions change the program counter. PowerPC uses a link register (lr) instead of a general purpose register (any of r0 through r31) or a memory location to hold the return address, seemingly to separate the branch processing unit (bpu) from the units that access general purpose registers or memory.
The instruction bl takes one operand, an immediate value for the address. There is no way to fit an opcode of 6 bits and an address of 32 bits into an instruction of only 32 bits. So bl has only 26 bits for an operand. Thus bl actually takes a 26-bit relative address and adds it to the program counter. This provides a range of about 32 MB in either direction. The assembler converts the operand to a relative address when it assembles the instruction bl.
The instruction bl puts would cause the assembler to convert the value of symbol puts to a relative address; but puts was an undefined symbol, so the assembler output a meaningless relative address of zero. An instruction bl with relative address zero would branch to itself for an infinite loop. (The address for PowerPC is relative to the branch instruction. The address for some other architectures would be relative to the next instruction after the branch.)
The address of every PowerPC instruction is a multiple of 4, thus the relative branch can also ignore the low 2 bits of the 26-bit relative address. Thus PowerPC uses those 2 bits to select the type of branch. Instruction bl uses opcode 18 and sets the lower 2 bits to 0x1. Mnemonic bl shares opcode 18 with three other mnemonics that set the lower 2 bits in other ways. Given bl puts, the assembler began with opcode 18, ended with 0x1, and filled the intermediate 24 bits with zeros, so the filter script would write (18|00000|00000|0001). A better filter script would write opcode 18 in decimal, the 26-bit relative address in hexadecimal, and the low 2 bits in binary.
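Following that description, the bl encoding can be sketched in Python, again in the filter script's four-part notation (my own illustration, under the text's framing of the bit fields):

```python
def encode_bl(rel):
    """Encode 'bl': opcode 18 in the top 6 bits, the 26-bit relative
    address in the rest, with its always-zero low 2 bits replaced by
    0x1 to mean 'branch and link'."""
    assert rel % 4 == 0, "instruction addresses are multiples of 4"
    li = (rel >> 2) & 0xFFFFFF        # the 24 stored bits of the offset
    word = (18 << 26) | (li << 2) | 0x1
    return "(18|{:05b}|{:05b}|{:04x})".format(
        (word >> 21) & 0x1F, (word >> 16) & 0x1F, word & 0xFFFF)

# bl to an undefined symbol: relative address zero, so the
# instruction would branch to itself
print(encode_bl(0))  # (18|00000|00000|0001)
```

The zero-offset case reproduces the bl instruction seen in the greetings.o disassembly.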
Opcode 19 is for many types of branches; the lower 26 bits somehow specify the type of branch. Mnemonic blr shares opcode 19 with many other mnemonics. Opcode 31 is for operations with special purpose registers; the lower 26 bits somehow pick a special register and an action. Mnemonics mflr and mtlr share opcode 31 with many other mnemonics, including the more general "move from special purpose register" (mfspr) and "move to special purpose register" (mtspr). The instructions blr, mflr and mtlr do not involve any symbols, so the lower 26 bits already have their final values.
The source file /usr/src/gnu/dist/binutils/opcodes/ppc-opc.c contains a table of powerpc_opcodes that lists the various mnemonics that use opcodes 18, 19 and 31.
This article aims to show configuration examples for common configurations. Configuration examples for login.conf, sysctl.conf or specific parameters required to newfs for particular setups belong here, but the article should not become a configuration files gallery for every setup. It also does not aim to explain every detail of the configuration. Links should be provided to the relevant detailed documentation.
For performance-oriented configuration details, also see Tuning NetBSD for performance.
This article is a work in progress.
Desktop PC
Generally, desktop systems run applications which heavily require executable and stack pages. Part of the file buffer cache may be sacrificed in most cases to permit the system to keep more executable pages in live memory.
sysctl.conf:
vm.execmin=14
vm.filemin=1
vm.execmax=70
vm.filemax=10
kern.maxvnodes=32768
login.conf:
default|:\
:datasize=256M:\
:memoryuse=256M:\
:stacksize=64M:\
:maxproc=2048:\
:openfiles=2048:\
:priority=-1:
kernel configuration:
options SHMMAXPGS=32768 # 2048 pages is the default
Database server
PostgreSQL
PostgreSQL recommends the following in its documentation:
options SYSVSHM
options SHMMAXPGS=4096
options SHMSEG=256
options SYSVSEM
options SEMMNI=256
options SEMMNS=512
options SEMMNU=256
options SEMMAP=256
The documentation also recommends enabling kern.ipc.shm_use_phys.
Contents
Introduction
This is a guide to Unix programming. It is intended to be as generic as possible, since we want to create programs that are as portable as NetBSD itself.
Compiling a C program
I will assume you know C and how to edit a file under Unix.
First, type the code into a file; let's say we put it in 'hello.c'. Now, to create a program from it, do the following:
$ gcc hello.c -o hello
<compiler output>
If gcc (GNU Compiler Collection) is not available, use the command 'cc' instead of 'gcc'. Command line switches can be different, though. If you use C++, everything said here is the same, but replace 'gcc' with 'g++'.
Now if there were no compiler errors, you can find the file 'hello' in the current working directory. To execute this program, type
$ ./hello
<Your program's output>
The reason you can't just type 'hello' is a good one (the current directory is usually not in your PATH), but I won't explain it further here. Beware: with some shells, the current line is overwritten when the program quits, so be sure to include at least a newline (\n) in the last line you print to the screen. This can be very frustrating if the program seems to give no output.
To compile larger programs, you can do this:
$ gcc -c file1.c
<compiler output>
And the same for file2.c etc. The -c switch tells gcc to only compile the code, and not link it into a complete program. This will result in a file with the same name as the source file, except the '.c' is replaced by '.o'.
Now, once you have compiled all your .c modules, you can tie the resulting object files (those with a .o extension) together with the following:
$ gcc file1.o file2.o file3.o -o program
<linker output>
Gcc will now link the object files together into your program. If all went well, you can run the program again with './program'.
If you forget the -o switch on linking, the resulting default program will be called 'a.out' and will be placed in the current working directory.
If an include file can't be found, you can use the -I switch to point gcc to the correct path. Example:
$ gcc -c -I/usr/local/include blah.c
This will compile blah.c into blah.o, and when it comes across a #include, it will look in /usr/local/include for that file if it can't find it in the default system include directory (or the current directory).
When you start using libraries (like libX11), you can use the -l, -L and -R flags when linking. This works as follows:
$ gcc -lX11 blah.o -o program
This will try to link the program to the file 'libX11.so' or 'libX11.a', depending on whether your system is dynamically or statically linked.
Usually, the linker can't find libraries that live outside its default search path, so use the following command:
$ gcc -L/usr/X11R6/lib -R/usr/X11R6/lib -lX11 blah.o -o program
The -L flag tells the linker where to find the 'libX11.so' or 'libX11.a' file. The -R flag is for 'tying in' this path into the program. The reason this needs to be done next to the -L, is that when the system is dynamically linked, the library will be accessed on demand. On statically linked systems, the 'libX11.a' file's contents will be copied into the final binary, which will make it a lot bigger. Nowadays, almost all systems are dynamically linked, so the 'libX11.so' file is used. But the contents are not copied over. The library itself will be accessed whenever the program is started.
To make sure the dynamical linker can find the library, you need to tie in the library's path. Some systems (notably, most Linuxen) function even when you don't tie in the path yourself, because it 'knows' about some common paths, but some other, more correct systems do not 'know' any of the common paths. This seems to be for security reasons, but I don't know the exact details of why this is insecure (probably has to do with chrooted environments). It never hurts to use '-R', so please do so in every project you do, since you might some day want to run it on another system.
Using Make
It can be tedious work to type in those long command lines all the time, so you probably want to use Make to automate this task.
A Makefile generally looks like the following:
# This is a comment
program: hello.o
gcc hello.o -o program
hello.o: hello.c
gcc -c hello.c
Just save the things between the lines as the file called 'Makefile' and type 'make'. Make will try to build the first 'target' (a word followed by a colon) it encounters, which in our case is 'program'. It looks at what is required before it can do so, which is hello.o (after the colon). Warning: The whitespace in front of the commands below the target must be exactly one tab, or it will not work. This is a deeply propagated bug in all versions of Make. (Don't let anyone convince you it's a feature; it's not.)
So if 'hello.o' does not exist, before Make can build 'program', it must first build 'hello.o'. This can be done from 'hello.c', as it can see in the next target. If hello.o already existed, it will look if hello.c is newer (in date/time) than hello.o. If this is not the case, it will not have to rebuild hello.o, as obviously(?) the file hasn't been modified. If the file is newer, it will have to rebuild hello.o from hello.c.
When Make has determined that it must build a target, it will look at the indented lines (with a tab) directly following the target line. These are the commands it will execute in sequence, expecting the target file to be a result of executing these commands.
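Make's core decision — rebuild or not — can be sketched in a few lines of Python (a simplification of my own; real Make also handles implicit rules, suffix rules and much more):

```python
import os

def needs_rebuild(target, prerequisites):
    """Rebuild if the target file is missing, or if any prerequisite
    has a newer modification time than the target."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prerequisites)

# e.g. needs_rebuild("hello.o", ["hello.c"]) mirrors the hello.o rule above
```

If the function returns True, Make runs the commands listed under the target.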
If you wish to build a particular file only, just type 'make <target>', where target is the name of the target you wish to build. Now we'll conclude with a bigger example of a Makefile:
# Assign variables. Doing this makes the Makefile easier to
# modify if a path is incorrect, or another compiler is used.
LINKFLAGS=-L/usr/X11R6/lib -R/usr/X11R6/lib
# NB! Don't assign CC, as many Linux folks do; it makes it impossible
# to use another compiler without changing the Makefile
# If an 'all' target is present, it is always executed by default
# (when you just type 'make') even if it's not the first target.
all: myprog
# In the following, '${CC}' will expand to the contents of the
# variable 'CC'.
myprog: first.o second.o
${CC} ${LINKFLAGS} -lX11 first.o second.o -o myprog
first.o: first.c
${CC} -c first.c
second.o: second.c
${CC} -c second.c
As you can see, you can do rather interesting things with Make.
Consider using BSD Make scripts, they simplify handling projects a lot. System builds using them.
Using BSD Make
Makefiles are nice, but typing the same lines all the time can get very annoying, even if you use SUFFIXES.
BSD's Make is a very nice Make, which comes pre-packed with some files which can make your life a lot easier and your Makefiles more elegant, but it is not compatible with GNU make. Some things are even incompatible between the Makes of different versions of BSD. Now that we've got that out of the way, let's see an example:
PROG= test
SRCS= test_a.c test_b.c
# We have written no manpage yet, so tell Make not to try and
# build it from nroff sources. If you do have a manpage, you
# usually won't need this line since the default name
# of the manpage is ${PROG}.1 .
MAN=
.include <bsd.prog.mk>
That's all there is to it! Put this in the same directory as the 'test' program's sources and you're good to go.
If you're on a non-BSD system, chances are that the normal 'make' program will choke on this file. In that case, the BSD Make might be installed as 'pmake', or 'bmake'. On Mac OS X, BSD make is called 'bsdmake', the default 'make' is GNU Make. If you can't find make on your particular system, ask your administrator about it.
The bsd.prog.mk file (in /usr/share/mk) does all the work of building the program, taking care of dependencies etc. This file also makes available a plethora of targets, like 'all', 'clean', 'distclean' and even 'install'. A good BSD Make implementation will even call 'lint' on your source files to ensure your code is nice and clean.
If you wish to add flags to the C compiler, the clean way to do it is like this:
CFLAGS+= -I/usr/X11R6/include
For the linker, this is done by
LDADD= -lX11
If you're adding libraries or include paths, be sure to make lint know about them:
LINTFLAGS+= -lX11 -I/usr/X11R6/include
If you're creating a library, the Makefile looks slightly different:
LIB= mylib
SRCS= mylib_a.c mylib_b.c
.include <bsd.lib.mk>
A library doesn't have a manpage by default. You can force one to be built by supplying a MAN line, of course.
As you can see, the BSD Make system is extremely elegant for large projects. For simple projects also, but only if you have one program per directory. The system does not handle multiple programs in one directory at all. Of course, in large projects, using directories for each program is a must to keep the project structured, so this shouldn't be a major problem.
The main directory of a project should contain this Makefile:
SUBDIR= test mylib
.include <bsd.subdir.mk>
Additionally, bsd.prog.mk and bsd.lib.mk always include the file ../Makefile.inc, so you can keep global settings (like DEBUG switches etc) in a Makefile.inc file at toplevel.
For more information, usually there is a /usr/share/mk/bsd.README file which explains BSD Make more completely than this little document. See also BSD Make.
Tip: Use LaTeX-mk if you want to build LaTeX sources with this type of BSD Makefiles.
Debugging programs with gdb
Now you know how to compile and link a program, but debugging is often very useful as well. I'll quickly explain some common things one can do with gdb, the GNU debugger.
At first, when you see someone using the debugger, it looks like yet another black art that can be done by gurus only, but it's not really too difficult, especially considering it's a command line debugger. If you practice a bit, using gdb will become second nature.
First, it is handy to compile a program with debugging symbols. This means the debugger knows about the variable and function names you use in your program. To do this, use the -g switch for gcc when compiling:
$ gcc -g blah.c -o program
For each object file, you must use -g on compilation. On linking, -g can be omitted. Now, run the debugger:
$ gdb program
<output and prompt>
To run the program, just type 'run'. To load a new binary into gdb, use 'file <binary>'. For information about possible commands, use 'help'.
When your program crashes, you can use the command 'bt' to examine the stack. With 'select <N>' you can select stack frame N. For example, suppose you're in the middle of a function foo which is called from function bar; you can switch to bar by moving up a frame in the stack (frame numbers increase toward the callers). Stack frame 0 is always the most recently called function.
With 'l' (or 'list') you can list a certain file, at a certain point. By default 'l' shows the point where the debugger has halted right now. It always lists 10 lines of context, with the middle line being the current line. If you wish to see some other file or some other line, use 'l file.c:line', with file.c being the file to look at, line the number of the line. With every consecutive 'l', the next lines will be shown, so you can scroll through the program by hitting 'l' all the time.
The nice thing about gdb is, that when you simply press enter, it executes the last command. So you need to type 'l' only once, and keep pressing enter until you see the right piece of code. This is especially useful when using 'step' (see below).
To investigate a variable, just use 'print <variable>'. You can print the contents of pointers in the same fashion as you would in C itself, by prefixing them with a *. You can also cast to int, char, etc. if you wish to see only that many bits of the memory (useful when investigating buffers). Printing a variable only works if you've selected a stack frame in which the variable has a meaning. So if function foo has an integer called 'i' and function bar also has one, the selected stack frame determines which of the two i's contents you get to see.
With 'break' you can set breakpoints. The program being debugged will stop when it gets to the line of the breakpoint. Breakpoints work like listings when giving line numbers/filenames. You can clear a breakpoint by issuing the same, but with 'break' replaced by 'clear'.
To continue the program, use 'continue'. This just resumes executing the program from the point where it halted. If the program really crashed, continue is of course not possible.
To walk through a program line by line, use 'step'. This way, after each line is executed, you can investigate how variables changed. It is common practice to set a breakpoint at a place in the program where you expect a problem to occur. When that breakpoint is hit, you can use 'step' to step through the failing part of the program.
If you think a certain function will work, use 'finish', which is like a continue, but only for the current function. When the current function returns to its caller, the program will stop.
With 'kill' you can immediately kill your program. 'quit' quits gdb.
References
Contents
Preface
Before you start doing user and group management you should:
- For security reasons, create a substitute user and name it as you like; here it is referred to as noroot:
# useradd -m -G wheel _noroot_
- Set a password for the noroot user:
# passwd _noroot_
Exit and log in as the noroot user.
Use the su command to obtain root privileges as noroot:
$ su
- Stop using the root account for maintenance or regular administration of the system. Keep the root password in a secure and convenient place so it is available when you need it.
If a suitable user with a login password already exists on the system, there is no need to create a new one; skip the first steps above. Just modify the user by adding it to the wheel group, so it can su anytime:
# usermod -G wheel _noroot_
User
NetBSD maintains information about each user who logs in, accesses the system, runs processes, and so forth. This includes, but is not limited to:
- user name
- password
- group
- base_dir
- skel_dir
- shell
- class
- homeperm
- inactive
- expire
The superuser called root has no limitations on its privileges.
To limit user privileges, consider setting limits on: coredumpsize, cputime, filesize, quota, maxproc, memory, openfiles, etc.
[user(8)](//man.NetBSD.org/user.8)
is a frontend to the useradd, usermod, userinfo and userdel commands; it helps to manage users in the system.
Use id(1) to see user identity:
$ id
Use w(1) to see who is present and what they are doing:
$ w
Use last(1) to see last logins:
$ last
useradd(8)
To add a user, do:
user add [options] _user_
To add a user and create a new home directory:
# useradd -m _myuser_
Look into the NetBSD Guide Chapter 5.6
userinfo(8)
To see user information do:
$ userinfo _myuser_
usermod(8)
To modify an existing user login, do:
# user mod [options] _user_
# usermod -C yes _username_ ; set Close lock on user account
# usermod -C no _username_ ; unlock user account
# usermod -G wheel _username_ ; add user to group _wheel_
# usermod -s /sbin/nologin _username_ ; remove login shell
# usermod -s /bin/sh _username_ ; set login shell
# usermod -F _username_ ; force user to change password
userdel(8)
To remove a user from the system do:
# userdel _myuser_
passwd(5)
To see a list of all users in the system do:
$ cat /etc/passwd
To edit /etc/passwd file do:
# vipw
chpass(1)
Use chpass, chfn, and chsh (chpass(1)) to add or change user database information.
To change the shell of myuser, for example to /bin/ksh:
# chpass -s /bin/ksh _myuser_
Group
To manage groups, check the /etc/group
file, which maintains the name of each group, the group id, and the list of users who are members of the group.
[group(8)](//man.NetBSD.org/group.8)
is a frontend to the groupadd, groupmod, groupinfo and groupdel commands; it helps to manage groups in the system.
To add a group, do:
group add [options] _group_
To delete a group, do:
group del [options] _group_
To obtain group information, do:
group info [options] _group_
To modify an existing group, do:
group mod [options] _group_
To remove a user from a group, you have to do user del
and then add the user again.
groupadd(8)
groupdel(8)
groupinfo(8)
groupmod(8)
Other
chmod(1)
chown(8)
To change files/directory ownership:
# chown -R myuser path
Where myuser is the name of the user and path is the directory where the files are located.
chgrp(1)
chroot(8)
quota(1)
Use quota to set user quotas as desired.
See also
Contents
Introduction
This How-To describes the process of installing a SpeedTouch 330 (PPPoA) in NetBSD 3.1, but the instructions given should apply also to older versions. The SpeedTouch 330 has been supported in NetBSD since version 1.5 of the OS.
The things you need to know before proceeding
You need to know the VPI and VCI numbers of your provider.
Some VPIs and VCIs are listed here:
<http://www.linux-usb.org/SpeedTouch/faq/index.html#q12>
You will also need your login and password to connect to your ISP.
For Neostrada in Poland the login is: @neostrada.pl
Getting the userspace tools
It's a good idea to download the userspace tools before installing NetBSD.
Links to the NetBSD-3.1 userspace tools binaries:
<http://ftp.netbsd.org/pub/pkgsrc/packages-2006Q4/NetBSD-3.1/i386/net/userppp-001107nb1.tgz>
<http://ftp.netbsd.org/pub/pkgsrc/packages-2006Q4/NetBSD-3.1/i386/net/speedtouch-1.3.1nb4.tgz>
Links to the NetBSD-2.1 userspace tools binaries:
<http://ftp.netbsd.org/pub/pkgsrc/packages-2006Q4/NetBSD-2.1/i386/net/userppp-001107nb1.tgz>
<http://ftp.netbsd.org/pub/pkgsrc/packages-2006Q4/NetBSD-2.1/i386/net/speedtouch-1.3.1nb4.tgz>
Once downloaded, store the files on a floppy or burn them to a CD-R(W) disc.
Installing the user space tools
If you have the tools on a floppy, execute as root:
mount -t msdos /dev/fd0a <the directory where you mount your floppy drive>
cd <the dir where you mount your floppy>
pkg_add userppp-001107nb1.tgz
pkg_add speedtouch-1.3.1nb4.tgz
If you have the tools on a CD:
mount -t cd9660 /dev/cd0a <the directory where you mount your cdrom>
cd <the dir where you mount your cdrom>
pkg_add userppp-001107nb1.tgz
pkg_add speedtouch-1.3.1nb4.tgz
If you have them elsewhere, I'm sure you know what to do.
Configuring the tools
Create a file 'ppp.conf' and paste the code below to it:
default:
ident user-ppp VERSION (built COMPILATIONDATE)
set log Phase Chat IPCP CCP tun command
set ifaddr 10.0.0.1/0 10.0.0.2/0 255.255.255.0 0.0.0.0
set login
adsl:
set authname <LOGIN>
set authkey <PASSWORD>
set device !"/usr/pkg/sbin/pppoa3 -c -m 1 -vpi <VPI> -vci <VCI> -d /dev/ugen0"
accept chap
set speed sync
set timeout 0
set reconnect 10 100
add default HISADDR
enable dns
Now replace <LOGIN> with your login, <PASSWORD> with your password, and <VPI> and <VCI> with your provider's VPI and VCI numbers respectively.
Now copy the file to /usr/pkg/etc/ppp/ppp.conf
mkdir /usr/pkg/etc/ppp
cp ppp.conf /usr/pkg/etc/ppp/ppp.conf
Starting the connection
Issue as root:
cp /usr/pkg/share/examples/rc.d/adsl /etc/rc.d
cd /etc/rc.d
./adsl forcestart
You should have a working connection now. Check it by issuing:
ping www.netbsd.org
If something went wrong, look for clues in the file /var/log/messages. And if you still have problems, feel free to drop me an e-mail with a description of the problem and the output of /var/log/messages.
My e-mail: ayrie3 (at) gmail (dot) com
To start the connection automatically at boot-time issue as root:
echo "adsl=YES" >> /etc/rc.conf
Thanks
nzk @ NetBSD (freenode) for helping me out with the linguistic aspect of the article.
Contents
Introduction
Build variables like CFLAGS, LDFLAGS, CPUFLAGS etc. are used to override default build options and to take advantage of special compiler options available on some target hardware platforms. The build variables are usually supplied through shell variables, make options or mk.conf, so original build infrastructure can remain clean of hacks. Most build options can be applied globally to the whole NetBSD userspace or to some library, program or source file for example, but sometimes the special options need a few exception within the source tree. This is where per-source-file build options override comes in handy.
Per source file build option override
For example the CPUFLAGS build variable can be changed to depend on the source file being compiled:
CPUFLAGS_source_file_name_without_white_space_and_extension=-mnormal_arch \
DEF_CPUFLAGS=-mspecial_arch \
CPUFLAGS='${CPUFLAGS_${.IMPSRC:T:R}:U${DEF_CPUFLAGS}}' \
nbmake
When the BSD make builds a C file (or the build command uses the CPUFLAGS variable), a check is made whether a variable named CPUFLAGS_source_file_name_without_white_space_and_extension exists. If this variable exists, its contents are used as CPUFLAGS; if it does not exist, DEF_CPUFLAGS is used instead. See the make manual for more variable name modifiers, regexp possibilities and other predefined make variables, which may all come in handy.
Example: libc cross-build for ARM/thumb, except for atomic_init_testset.c
$ cd lib/libc
$ CPUFLAGS_atomic_init_testset=-mthumb-interwork \
> DEF_CPUFLAGS='-mthumb -mthumb-interwork' \
> CPUFLAGS='${CPUFLAGS_${.IMPSRC:T:R}:U${DEF_CPUFLAGS}}' \
> $TOOLDIR/bin/nbmake-evbarm
References
Contents
What is WPA/WPA2?
Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are 802.11 wireless authentication and encryption standards, the successors to the simpler Wired Equivalent Privacy (WEP). Most "closed" or "locked" 802.11 wireless networks use WPA/WPA2 authentication. On NetBSD, the wpa_supplicant(8) daemon handles WPA/WPA2.
To configure WPA/WPA2, you must create the file /etc/wpa_supplicant.conf
(wpa_supplicant.conf(5)).
You can find examples for /etc/wpa_supplicant.conf
in
/usr/share/examples/wpa_supplicant/wpa_supplicant.conf
.
The simplest case is a network, say my favourite network
, with a
fixed passphrase, say hunter2
.
For this case, fill your /etc/wpa_supplicant.conf
file with:
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=wheel
network={
ssid="my favourite network"
psk="hunter2"
}
Then enable wpa_supplicant on your network interface device, say
iwn0
, by editing /etc/rc.conf
(rc.conf(5))
to add
wpa_supplicant=YES
wpa_supplicant_flags="-i iwn0 -c /etc/wpa_supplicant.conf"
If your LAN is configured with DHCP, you will likely also want
dhcpcd=YES
in /etc/rc.conf
to run dhcpcd(8).
Then start wpa_supplicant with the shell command:
# /etc/rc.d/wpa_supplicant start
or reboot for the change to take effect.
You can query the current status of WPA/WPA2 with the shell command:
# wpa_cli status
If you want to configure more 802.11 networks, add more network
stanzas to /etc/wpa_supplicant.conf
, and notify wpa_supplicant of
them:
# /etc/rc.d/wpa_supplicant reload
Do not wait for lease; useful if no network is within reach, so boot will not hang
For a typical laptop, you will usually want to use DHCP to get an IP
address on any network you're on, but you won't always be on the
network.
In that case, when you're booting up, you don't want to have to wait
until you can associate with the network and get a DHCP lease.
You can pass the -b
flag to
dhcpcd(8)
to make it immediately go into the background, by setting
dhcpcd_flags
in /etc/rc.conf
:
dhcpcd_flags="${dhcpcd_flags} -b"
Other Network Configurations
wpa_supplicant can also connect to other wireless network
configurations.
These networks can be given different priorities using the priority
field, with a higher number indicating a higher priority.
Hidden Networks
If the network is hidden, so that the access point does not broadcast
its presence, you must specify the scan_ssid=1
option:
network={
ssid="my network"
scan_ssid=1
psk="sekret"
}
Open Networks
network={
ssid="MYUNPROTECTEDWLAN"
key_mgmt=NONE
priority=100
}
WEP encryption
WEP is the weakest of current 802.11 encryption solutions. It is known to be completely broken: breaking WEP can be done in mere seconds. However, sometimes there is a need to use WEP in legacy networks. Here is a configuration if you want to do it with wpa_supplicant:
network={
ssid="MYWEAKLYENCRYPTEDWLAN"
key_mgmt=NONE
wep_key0="12345" # or 13 characters, or a hexkey starting with 0x
wep_tx_keyidx=0
}
Note that you don't have to use wpa_supplicant to configure WEP -- you can also simply use ifconfig(8):
ifconfig ath0 ssid MYWEAKLYENCRYPTEDWLAN nwkey 12345
Password-Authenticated MSCHAPv2
This seems to be a common configuration for password-authenticated networks:
network={
ssid="WLANSSID"
key_mgmt=IEEE8021X
eap=PEAP
phase2="auth=MSCHAPV2"
identity="login"
password="password"
}
See also
Introduction
Normally a user can use the ps command to see any process running on a system.
A nice additional security measure is to hide these processes.
A user may be able to "see" only her own processes thanks to the security.curtain sysctl option.
Execution
Run following command as a user with root privileges:
# sysctl -w security.curtain=1
Now type as a normal user:
$ ps auxww
You should be able to see only processes belonging to your user.
Picture Transfer Protocol
Please note this is what worked for me; your mileage may vary.
I have a KODAK EasyShare CX7300 Digital Camera, and I wanted to get the images off of it. I plugged in the camera, via a USB cable, to my laptop. NetBSD recognized the camera as:
ugen0: Eastman Kodak Company KODAK EasyShare CX7300 Digital Camera
So far so good!
Now I needed some software that could access the camera and pull the images off it. After a little browsing in pkgsrc/graphics
I decided on gphoto2, the CLI version of gphoto. Assuming your pkgsrc source tree is located in /usr/pkgsrc
, you may install it with:
cd /usr/pkgsrc/graphics/gphoto2
# make install
Once installed, simply run:
gphoto2 --get-all-files
to fetch all the files from your camera and place them in the current dir.
USB Mass Storage Protocol
Many cameras can be configured to look like disk drives, instead of the camera mode discussed above. When plugging them in to USB, you will see in dmesg umass, scsibus, and then sdN. Mount sdNe as an msdos filesystem and copy your pictures. See Mounting a Windows file system.
Card Readers
There are many USB card readers that accept CF, SD, etc. and plug into USB. These appear as umass/scsibus/sd and can be mounted as disk drives. The Sandisk ImageMate 12-in-1, at least the version purchased in late 2005, is known to work well.
Since 5.0 NetBSD supports changing the MAC address.
Add a link-layer address to an Ethernet:
# ifconfig sip0 link 00:11:22:33:44:55
Add and activate a link-layer address:
# ifconfig sip0 link 00:11:22:33:44:55 active
This should give some idea:
ifconfig bridge0 create
ifconfig bridge0 up
ifconfig vr0 up
brconfig bridge0 add vr0
ifconfig tap0 create
sysctl -w net.link.tap.tap0=00:00:de:ad:be:ef
ifconfig tap0 up
brconfig bridge0 add tap0
Now you can get an IP from DHCP on the interface with the fake MAC:
dhclient tap0
See also
How to throttle your CPU
Basic support is in generic kernels by default. sysctl and sysutils/estd do the rest.
Contents
i386
AMD PowerNow!
options POWERNOW_K7
or
options POWERNOW_K8
accessible through e.g.
sysctl machdep.powernow.frequency.current
sysctl machdep.powernow.frequency.available
sysctl -w machdep.powernow.frequency.target=600
Intel Speedstep
options ENHANCED_SPEEDSTEP
accessible through e.g.
sysctl machdep.est.frequency.current
sysctl machdep.est.frequency.available
sysctl -w machdep.est.frequency.target=600
speedstep-ich
Available since NetBSD 2.0
ichlpcib* at pci? dev ? function ?
isa0 at ichlpcib?
accessible through
sysctl -w machdep.speedstep_state=[0/1]
where 0 is a low state and 1 is a high state.
speedstep-smi
Available since NetBSD 4.0
piixpcib* at pci? dev ? function ?
isa0 at piixpcib?
also accessible through
sysctl -w machdep.speedstep_state=[0/1]
Transmeta longrun
- Should be activated by default
accessible through
machdep.tm_longrun_mode
machdep.tm_longrun_frequency
machdep.tm_longrun_voltage
machdep.tm_percentage
amd64
Cool'n'Quiet
Same procedure as with PowerNow!:
options POWERNOW_K8
accessible through e.g.
sysctl machdep.powernow.frequency.current
sysctl machdep.powernow.frequency.available
sysctl -w machdep.powernow.frequency.target=600
Setting up estd for automatic scaling
The estd daemon dynamically sets the frequency on SpeedStep and PowerNow!-enabled CPUs depending on current CPU-utilization. It is written for systems running NetBSD or DragonFly.
cd /usr/pkgsrc/sysutils/estd
make install clean
To make it start at boot-time
cp /usr/pkg/share/examples/rc.d/estd /etc/rc.d/
chmod +x /etc/rc.d/estd
and add to /etc/rc.conf
estd="yes"
estd_flags=""
then as root
/etc/rc.d/estd restart
Examples
Maximize battery lifetime by limiting CPU-frequency to 1000 MHz and switching to lower speeds fast:
estd_flags="-d -b -M 1000"
Maximize performance by running at least at 1400MHz and switching to higher speeds real fast:
estd_flags="-d -a -m 1400"
Alternatively, you can start estd directly from /etc/rc.local instead of rc.conf/rc.d.
More in the manual.
A simple example of ALTqd in action
The problem was that I needed to make sure that all UDP traffic for my SIP VoIP phones was handled with a higher priority than normal low-priority internet traffic such as web requests. ALTQ is a very powerful tool that can do this and a lot of other things. It was invented as part of the KAME project. On NetBSD it works by enabling it in the kernel and then running a user-space daemon to implement your shaping rules.
Getting the kernel ready to use Altqd
You'll need to make sure your kernel supports ALTQ. It almost certainly will not do so by default: the GENERIC kernel and its variants like GENERIC.MP do not support ALTQ out of the box, so you'll need to build a new kernel with the ALTQ features enabled. Recompiling your kernel is not really hard, but it's outside the scope of what I'm trying to explain here; you can find detailed descriptions of how to do it on this wiki or in the NetBSD guide. To enable ALTQ in your new kernel, uncomment the block of ALTQ option lines you'll find in the configuration file. Once you have installed the new kernel, reboot your computer.
Enabling ALTQd
You'll need to create an empty /etc/altq.conf file for starters, then edit your /etc/rc.conf and add a line at the bottom that says altqd=yes. Reboot, or run /etc/rc.d/altqd start
Now configure it
You now want to set up your altqd classes and filters. The class tells altqd what kind of traffic shaping you want to do. There are about five different types, and they all have their uses in various situations. In my case I knew that SIP traffic was all UDP, so I didn't bother with any kind of complex filter. I would notice that SIP audio quality would fall anytime people were really cranking on web pages and downloading stuff. So here is what I used in my /etc/altq.conf:
# We have a 5mbit link to our ISP over a fiber-ethernet connection
#
interface bge0 bandwidth 5M priq
# Create a class for important traffic
class priq bge0 high_class NULL priority 2
# We raise the priority of all UDP traffic. The only significant
# type we will see is VoIP and that should be fine.
filter bge0 high_class 0 0 0 0 17
# All other traffic is lower priority than VoIP
class priq bge0 low_class NULL priority 0 default
What the config file meant
Okay, let's face it. The configuration is somewhat less than straightforward. So let's break down what we did.
interface bge0 bandwidth 5M priq
This line says that we intend to use the first Broadcom Ethernet interface as the outgoing WAN interface to shape traffic on. In order for ALTQ algorithms to work, most of them need to know how much bandwidth that line has. Even though that interface is capable of 1000mbit (it's a gigabit copper port), it's connected to a fiber optic media converter which is connected to my ISP. The ISP only gives me the capability to transmit at 5mbit, so I don't want ALTQ to assume we are dealing with a gigabit link here. Otherwise, I suspect our shaping rules wouldn't work at all. Also, you'll notice that I keep talking about this being the WAN interface. Shaping traffic on the LAN interface of this firewall (it does NAT using IP Filter) might work as well. However, I prefer to shape it at the last possible moment before it leaves my control. The authors of the altq.conf(5) man page seem to agree with this approach.
class priq bge0 high_class NULL priority 2
This line says that we want to create a new parent class for ALTQ. The parent class tells ALTQ what kind of shaping we'll be doing. In this case it's priq, which is shorthand for priority queueing. This algorithm makes sure that network buffers with my chosen type of traffic will be separated out by using filters, and the ones with higher priorities will be emptied first. Moving on, we see that the class applies to the interface bge0, which is good since we just defined that above. Next is a simple user-defined label for this class. I chose the string high_class so it would be clear. You can call it anything you want; I could have chosen voip_class or udp_traffic, it doesn't matter. You'll notice the next item is NULL. That's because this class is a parent class and thus has no parent class of its own; the man page was helpful enough to tell me what to use there. The last two items, priority 2, are pretty clear. I'm simply telling this class what priority it is. The higher the value, the more priority it has. In most cases I believe you can assign an integer value of 0-15, with 15 being the highest priority.
Now, we need a way to add certain types of traffic into this class. We'll use the altq filter syntax to do this. That's where the next line comes in:
filter bge0 high_class 0 0 0 0 17
The first three items should be clear enough. We are writing a filter for interface bge0, and we want traffic that matches the filter to fall under the purview of the high_class class. The integers that follow are the part where I wish the author had been just a tad more IP Filter-ish or PF-like. Here is the template for those from the altq.conf(5) man page:
dst_addr [netmask mask] dport src_addr [netmask mask] sport proto [tos value [tosmask value]] [gpi value]
The value "0" means "any". All the stuff in square brackets is optional and not used in my example. So I'm saying "from any host on any port to any host on any port using protocol UDP". What about that spurious 17 at the end, you say? It's the protocol number for UDP. You can find all such numbers in your /etc/protocols file. Now let's move on to the last line:
class priq bge0 low_class NULL priority 0 default
With the priq algorithm there must be a default class. This is because you aren't going to want to have a filter for every conceivable type of traffic. We assign this class a lower priority than the class we just made for UDP. Also note that you don't have to write a filter for the default class, as that would kind of defeat the whole purpose of having a default class.
Monitoring Your ALTQ Classes in Realtime
You can use the tool altqstat(1) to see traffic flowing through your classes in realtime. I'd highly recommend doing so and testing to make sure things are getting filtered where you want them. Just kick off altqstat(1) from the command line and let it run. The second time it spits out some data, it'll start giving you kb/s figures which are very useful. Here is an example from mine:
bge0:
[high_class] handle:0x3 pri:2
measured: 32.46kbps qlen: 0 period:4955623
packets:4971367 (563368265 bytes) drops:0
[low_class] handle:0x1 pri:0
measured: 439.47kbps qlen: 0 period:8783713
packets:9110912 (3917080357 bytes) drops:1338
You can see that my phones are taking 32.46kbps and everything else is consuming 439.47kbps. Had I not set up altq with priq, my UDP phone traffic would be drowning in a sea of HTTP requests. Instead, my calls are getting their packets out the door precious fractions faster, and that's making a big difference in my call quality. Yeah!
Install All Utilities
Install meta-pkgs/pulseaudio-tools.
Setup
Pre-requisites
pulseaudio needs dbus to run. So if it's not running already:
cp /usr/pkg/share/examples/rc.d/dbus /etc/rc.d
echo dbus=YES >> /etc/rc.conf
Third Party Applications
ALSA Applications
Ignore on NetBSD?
Amarok (audio/amarok)
Install audio/xine-pulse, select Xine engine and 'pulseaudio' backend.
Audacious (audio/audacious)
Not tested.
Audacity (audio/audacity)
'padsp' method works. 'pasuspender' crashes the daemon on resume (will be fixed in 0.9.15)
ESOUND Applications
Create symlink manually:
mkdir -p /tmp/.esd && ln -s /tmp/.esd-${UID} /tmp/.esd/sound
Flash Player (multimedia/ns-flash)
Install multimedia/libflashsupport-pulse on i386 and amd64. Flash 7 on sparc not tested.
GNOME
Works via GStreamer.
GStreamer Applications
Install audio/gst-plugins0.10-pulse.
iaxComm
Not in pkgsrc.
KDE
pulseaudio crashes in protocol-esound.c:do_work(). Probably the same bug as pa#463.
libao Applications
Install audio/libao-pulse.
MPD (audio/musicpd)
Compile musicpd with the default-off pulseaudio option enabled.
Configure a matching audio_output section in mpd.conf:
audio_output {
type "pulse"
name "Pulseaudio"
}
MPlayer (multimedia/mplayer)
pulseaudio support added in 1.0rc10nb12 and works.
$ mplayer -ao pulse myvideo.avi
or add the line
ao=pulse
to .mplayer/config.
If you have audio/video sync problems, you can modify the sync with the plus ('+') and minus ('-') keys.
MPlayer plug-in (multimedia/mplayer-plugin-*)
Not tested, but should work as described.
MythTV (wip/mythtv)
WIP package is obsolete and does not build on NetBSD.
OpenAL Applications
Not tested, but should work as described.
OSS Applications
'padsp' works.
SDL (devel/SDL)
1.2.12 in pkgsrc. Works fine.
Skype (net/skype)
Not tested.
Teeworlds
Not in pkgsrc.
TiMidity++ (audio/timidity)
Works via libao -- install timidity-2.13.2nb10 and libao-pulse.
Totem (multimedia/totem)
Works via GStreamer.
VideoLAN (multimedia/vlc)
0.9.8a in pkgsrc, pulseaudio supported. Not tested, but should work as described.
wavbreaker
Not in pkgsrc.
WINE (emulators/wine)
Not tested.
Xine
Install audio/xine-pulse.
XMMS
xmms-pulse not in pkgsrc; xmms-esound works, but can crash pulseaudio.
Introduction
The purpose of this document is to guide you in tracking the NetBSD CVS source repository with git. We are going to make a "bulk import", meaning that not all the CVS checkins with their comments are going to be imported. If we wanted that, we would use the git-cvsimport(1) tool.
Download current source
To begin with, we must have a current CVS snapshot of the NetBSD source tree. I assume you already have one in /usr/src-git. If not, please consult this article on the official NetBSD documentation page.
Symbiosis
Since both CVS and Git are going to live together inside the same directory, it is essential to hide each one from the other. That is because we don't want CVS to track git files or vice versa. For this purpose we need two files inside the root directory of our source tree.
.cvsignore
Create a file named .cvsignore inside /usr/src-git and put the following lines in it:
.git/
.gitignore
.gitignore
Create a file named .gitignore inside /usr/src-git and put the following lines in it:
CVS/
.cvsignore
Initial import of NetBSD CVS source repository
# cd /usr/src-git
# git init
Initialized empty Git repository in /usr/src-git/.git/
# git add .
# git commit -m "Initial import of the NetBSD CVS source repository"
Keeping the git repository up to date with CVS
From time to time, we update the main git repository with the latest changes from CVS.
# cvs update -dPA
# git add .
# git commit -a -m "Update to latest CVS snapshot"
Done
You can examine the log:
# cd /usr/src-git
# git log
commit 9fe7a0469a8ef58fd5b484c0931808a2d23f559e
Author: Charlie <root@netbsd.(none)>
Date: Mon Nov 24 01:17:31 2008 +0200
Update to latest CVS snapshot
commit 78189d3fa331073021eaea88d1f51f3487d756b7
Author: Charlie <root@netbsd.(none)>
Date: Mon Nov 24 00:59:47 2008 +0200
Initial import of the NetBSD CVS source repository
#
Or create a crazy branch to work on, offline, without being embarrassed if it ends up as a dead end.
# cd /usr/src-git
# git branch
* master
# git checkout -b crazy-branch
Switched to a new branch "crazy-branch"
# git branch -a
* crazy-branch
master
#
Google turned up nothing when I was searching for how to use ccache with NetBSD's build.sh script. My "solutions" are fairly ugly but hopefully point someone in the right direction. There may very well be better ways of doing things, but this works for me.
(note: These steps were used for a cross-compilation of NetBSD_4.0/i386 on a FreeBSD_6.2/i386 host. The basic ideas should be fairly generic and applicable to other host/target pairs, including native NetBSD builds. The basic ideas might also help in getting build.sh to use distcc)
The goal is to use ccache for c/c++ compiles done by build.sh (the build.sh "target" e.g. "release", "distribution", etc. should not matter, any target that compiles c/c++ code can potentially benefit from using ccache).
This goal can be achieved by realizing 2 subgoals:
Objective 1) - make build.sh use ccache for HOST_CC/HOST_CXX (host compiler)
Objective 2) - make build.sh use ccache for CC/CXX (target compiler)
e.g. when compiling NetBSD on a FreeBSD system, HOST_CC/HOST_CXX point to a FreeBSD compiler, which will build a NetBSD cross-compiler (CC/CXX) that runs on the host system.
For objective 1), my issue turned out to be that there are some Makefiles in the NetBSD sources that prefix some commands with /usr/bin/env -i, which clears the environment. In my case, my ccache invocation requires CCACHE_DIR/CCACHE_PATH/PATH to be set appropriately, which /usr/bin/env -i breaks.
Fair is fair, so my workaround was simply to use the env command myself in HOST_CC/HOST_CXX:
export HOST_CC='env CCACHE_DIR=/whereever CCACHE_PATH=/whereever PATH=/whereever /usr/local/libexec/ccache/cc'
Note: you might have quoting issues if CCACHE_DIR/CCACHE_PATH/PATH contain space characters. Such issues are beyond the scope of this document.
Objective 2) is a bit hairier.
My first approach was simply to stick
CC = <ccache_stuff> ${CC}
CXX = <ccache_stuff> ${CXX}
in a $MAKECONF file, and point build.sh at that. This fails because (near as I can tell) in XXX/src/share/mk/bsd.own.mk around line 199 (w/ NetBSD 4.0 sources) there are lines of the form:
.if ${USETOOLS_GCC:Uyes} == "yes" # {
CC= ${TOOLDIR}/bin/${MACHINE_GNU_PLATFORM}-gcc
CPP= ${TOOLDIR}/bin/${MACHINE_GNU_PLATFORM}-cpp
CXX= ${TOOLDIR}/bin/${MACHINE_GNU_PLATFORM}-c++
...
Even though $MAKECONF is included at the top of bsd.own.mk, these lines will override whatever $MAKECONF sets CC and friends to.
Although I tried to avoid patching the sources at all (I build from a script, trying to automate things) I caved and added a line at line 208 in XXX/src/share/mk/bsd.own.mk.
.endif # EXTERNAL_TOOLCHAIN # }
# below line was added
.-include "${MAKECONF}"
to force bsd.own.mk to use my CC/CXX values from my $MAKECONF.
At the least, you will probably need to ensure that
CCACHE_PATH="$tool_dir"/bin
PATH="$tool_dir"/bin:"$PATH"
are in the environment for CC/CXX. In contrast, $tool_dir/bin is NOT needed in these vars for HOST_CC/HOST_CXX.
NOTE: $tool_dir can be specified to build.sh via the -T option.
Finally, when I had a $MAKECONF with:
CC = /usr/bin/env \
CCACHE_DIR=<wherever> \
CCACHE_PATH=<wherever> \
PATH=<whatever> \
/usr/local/bin/ccache \
<tool_dir>/bin/<target_arch>--netbsd<target_objformat>-gcc
(sans backslashes and newlines) and thought I had won, my compile seemed to hang forever. Not sure what caused this.
Anyhow, I ended up creating CC/CXX wrapper scripts (well, changing my build script that calls build.sh to create the wrappers).
My CC/CXX scripts are just (sans backslashes and newlines):
#! /bin/sh
# fill in with yer own paths, target arch., etc.
exec /usr/bin/env \
CCACHE_DIR=/usr/obj/ccache \
CCACHE_PATH=/XXX/TOOLS/bin \
PATH=/XXX/TOOLS/bin:<rest of $PATH> \
/usr/local/bin/ccache \
/XXX/TOOLS/bin/<arch>--netbsd<obj_format>-<gcc/c++> \
"$@"
NOTE: "$@" is important. $* will not handle multiple args containing spaces correctly.
And the $MAKECONF I (my script) passes to build.sh is simply
CC = /xxx/path_to_cc_wrapper
CXX = /xxx/path_to_cxx_wrapper
YMMV, but this setup works for me. If anyone knows better ways to do things, feel free to update this guide with your way of doing things. In particular, a method that does not require patching NetBSD sources at all, even if it is just a single line.
Tuning the kernel
Process and file descriptor limits
Before reading:
These are mostly demonstrative values showing how to tune your system for different needs, not some kind of ultimate optimal values. This article mostly aims to provide a quick overview of the ways to fine-tune your system settings and to make you aware of the limitations.
maxusers
The name is a bit misleading: it doesn't set the number of users on the system, but is used in the formula that calculates the maximal number of allowed processes.
You can find it in your kernel configuration file, something like this:
maxusers 32
This is the default value, so if we look at the formulas, we get these process limit values:
/usr/src/sys/param.h:
#define NPROC (20 + 16 * MAXUSERS)
/usr/src/sys/conf/param.c:
#define MAXFILES (3 * (NPROC + MAXUSERS) + 80)
So we get 532 for NPROC (the maximal number of processes) and 1772 for MAXFILES (the maximal number of open file descriptors).
Some say that the maxusers should be set to the amount of RAM in megabytes.
For reference, FreeBSD sets it automatically by a similar formula, but limits its maximum to 384.
Setting it to 64 is always a safe bet if you don't want too much experimenting. Just change it in your kernel configuration file:
maxusers 64
Compile the new kernel with build.sh or manually, install the new kernel, and reboot.
You can check your limits with sysctl:
With maxusers 32
$ sysctl proc.curproc.rlimit.maxproc
proc.curproc.rlimit.maxproc.soft = 160
proc.curproc.rlimit.maxproc.hard = 532
$ sysctl proc.curproc.rlimit.descriptors
proc.curproc.rlimit.descriptors.soft = 64
proc.curproc.rlimit.descriptors.hard = 1772
With maxusers 64
You can check your limits with sysctl:
$ sysctl proc.curproc.rlimit.maxproc
proc.curproc.rlimit.maxproc.soft = 160
proc.curproc.rlimit.maxproc.hard = 1044
$ sysctl proc.curproc.rlimit.descriptors
proc.curproc.rlimit.descriptors.soft = 64
proc.curproc.rlimit.descriptors.hard = 3404
login.conf
Now you can change the hard limits. Let's look at the soft limits, for example with ulimit:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) 131072
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 80920
max memory size (kbytes, -m) 242760
open files (-n) 64
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 2048
cpu time (seconds, -t) unlimited
max user processes (-u) 160
virtual memory (kbytes, -v) 133120
You can set it with the file /etc/login.conf:
default:\
:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/pkg/bin /usr/pkg/sbin /usr/local/bin:\
:umask=022:\
:datasize-max=3072M:\
:datasize-cur=1024M:\
:maxproc-max=1044:\
:maxproc-cur=512:\
:openfiles-cur=256:\
:stacksize-cur=8M:
The next time you start the system, all users belonging to the default login group will have the following limits:
$ ulimit -a
coredump(blocks) unlimited
data(KiB) 1048576
file(blocks) unlimited
lockedmem(KiB) 124528
memory(KiB) 373584
nofiles(descriptors) 256
processes 512
stack(KiB) 8192
time(cpu-seconds) unlimited
You may set different limits for different users, and thus different services:
database:\
:ignorenologin:\
:datasize=infinity:\
:maxproc=infinity:\
:openfiles-cur=1024:\
:stacksize-cur=48M:
You should run this command after editing your login.conf:
$ cap_mkdb /etc/login.conf
You can assign the newly created login class to the desired user by doing something like this:
$ usermod -L database pgsql
Let's check our limits again with sysctl:
$ sysctl proc.curproc.rlimit.maxproc
proc.curproc.rlimit.maxproc.soft = 512
proc.curproc.rlimit.maxproc.hard = 1044
$ sysctl proc.curproc.rlimit.descriptors
proc.curproc.rlimit.descriptors.soft = 256
proc.curproc.rlimit.descriptors.hard = 3404
Much more reasonable for a modern system.
System V interprocess communication
Shared memory and semaphores are part of the System V IPC. Using and fine tuning shared memory and semaphores can give you increased performance on your NetBSD server.
You can check its settings with sysctl:
$ sysctl kern.ipc
kern.ipc.sysvmsg = 1
kern.ipc.sysvsem = 1
kern.ipc.sysvshm = 1
kern.ipc.shmmax = 8388608
kern.ipc.shmmni = 128
kern.ipc.shmseg = 128
kern.ipc.shmmaxpgs = 2048
kern.ipc.shm_use_phys = 0
kern.ipc.msgmni = 40
kern.ipc.msgseg = 2048
kern.ipc.semmni = 10
kern.ipc.semmns = 60
kern.ipc.semmnu = 30
As you can see, the default maximum size of a shared memory segment (shmmax) is 8 megabytes, but for a PostgreSQL server you will most likely need about 128 megabytes.
Note that you cannot set shmmax directly with sysctl; you need to set the value in pages with kern.ipc.shmmaxpgs. The default PAGE_SIZE is 4096, so if you want to set it to 128M, you have to do:
grimnismal# sysctl -w kern.ipc.shmmaxpgs=32768
kern.ipc.shmmaxpgs: 4096 -> 32768
So the formula is: 128 * 1024 * 1024 / 4096 = 32768
You can make any sysctl change permanent by setting it in /etc/sysctl.conf
You can also get detailed information on System V interprocess communication (IPC) facilities on the system with the following command:
$ ipcs
IPC status from <running system> as of Mon Dec 3 18:52:00 2007
Message Queues:
T ID KEY MODE OWNER GROUP
Shared Memory:
T ID KEY MODE OWNER GROUP
m 65536 5432001 --rw------- pgsql pgsql
Semaphores:
T ID KEY MODE OWNER GROUP
s 65536 5432001 --rw------- pgsql pgsql
s 65537 5432002 --rw------- pgsql pgsql
s 65538 5432003 --rw------- pgsql pgsql
You can also force shared memory to stay in physical memory, meaning it will never be paged out to swap. You may set this behaviour with the kern.ipc.shm_use_phys sysctl.
TCP Performance
Socket buffers
TCP uses what is called the “congestion window” to determine how many packets can be sent at one time. The larger the congestion window size, the higher the throughput. The maximum congestion window is related to the amount of buffer space that the kernel allocates for each socket.
So on a high-bandwidth line, the bottleneck could be the buffer sizes.
Here's the formula for a network link's throughput:
Throughput = buffer size / latency
So if we reorganise it a bit, we get the formula for the ideal buffer size:
buffer size = 2 * delay * bandwidth
The delay is the network latency, which is most commonly known as "ping".
I think I don't have to introduce this tool:
$ ping yahoo.com
PING yahoo.com (66.94.234.13): 56 data bytes
64 bytes from 66.94.234.13: icmp_seq=0 ttl=50 time=195.596 ms
64 bytes from 66.94.234.13: icmp_seq=1 ttl=50 time=188.883 ms
64 bytes from 66.94.234.13: icmp_seq=2 ttl=51 time=192.023 ms
^C
----yahoo.com PING Statistics----
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.883/192.167/195.596/3.359 ms
However, ping(1) gives you the round-trip time of the network link -- which is twice the delay -- so the final formula is the following:
buffer size = RTT * bandwidth
Fortunately, there is an automatic control for those buffers in NetBSD. It can be checked and enabled with sysctl:
net.inet.tcp.recvbuf_auto = 0
net.inet.tcp.recvbuf_inc = 16384
net.inet.tcp.recvbuf_max = 262144
net.inet.tcp.sendbuf_auto = 0
net.inet.tcp.sendbuf_inc = 8192
net.inet.tcp.sendbuf_max = 262144
The default values for the maximal send and receive buffers are set to 256 KBytes, which is quite small. A reasonable value for newer systems would be 16 MBytes, so you may set the maximums to that value after turning autosizing on with sysctl:
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
Just remember that your application has to avoid using SO_RCVBUF or SO_SNDBUF if it wants to take advantage of using automatically sized buffers.
Increase the initial window size
RFC 6928 permits the extension of the initial window size to 10 segments. By default NetBSD uses 4 segments as specified in the RFC 3390. You can increase it by using the following sysctl's:
net.inet.tcp.init_win=10
net.inet.tcp.init_win_local=10
IP queue
If you are seeing drops due to the limited IP queue (check the net.inet.ip.ifq.drops sysctl), you can increase that by using:
net.inet.ip.ifq.maxlen = 4096
Other settings
If you are still seeing low throughput, maybe it's time for desperate measures! Try changing the congestion algorithm to CUBIC using:
net.inet.tcp.congctl.selected=cubic
Or try to decrease the limit (expressed in hz ticks) at which the system fires a delayed ACK (for an odd-numbered packet). Usually one tick is 10ms, but you may want to double-check using the kern.clockrate sysctl and dividing one second by the value there. So, to decrease delack_ticks to 50ms, use:
net.inet.tcp.delack_ticks=5
Disk I/O
You may enable additional buffer queue strategies for better responsiveness under high disk I/O load.
Enable them with the following lines in your kernel configuration file:
options BUFQ_READPRIO
options BUFQ_PRIOCSCAN
Using optimized FLAGS with GCC
NOTE: Trying to utilise heavy optimisations can make your system hard to debug, cause unpredictable behaviour, or kill your pet. In particular, use of -mtune is highly discouraged: it does not improve performance considerably, or at all, compared to -march=i686, and gcc4 can't handle it correctly, at least on Athlon CPUs.
You can put something like this into your mk.conf when you compile your packages and your system.
CPUFLAGS+=-march=i686
COPTS+=-O2
FIXME: This is only for building world
CFLAGS+="-O2 -march=i686"
FIXME: For packages
For more detailed information about the possible CFLAG values, please read the GNU C Compiler documentation gcc(1).
Startup & Shutdown
Shutdown the machine immediately and reboot
shutdown -r now
Shutdown the machine and immediately power off (for ATX hardware, you will need to install and enable the apm(8) support in kernel: see ?How to poweroff at shutdown for details)
shutdown -p now
Shutdown the machine and halt afterwards (but keep the power on)
shutdown -h now
File & Directory Operations
Find out what directory you are in
pwd
List contents of current working directory
ls
Copy a file to a new file with a different name
cp foobar.txt barfoo.txt
Remove a file (delete it)
rm foobar.txt
Remove an "empty" directory.
rmdir mydir
Recursively remove a directory (be careful with this, if you mess up, you can easily wipe out your whole system). This will remove the directory and all the files and subdirectories it contains, provided you have permissions.
rm -rf mydir
Using Archives
Create a "tar" archive with the contents of a directory, displaying each file and directory as tar runs verbosely.
tar cvf my_archive.tar some_directory
The same using pax:
pax -w -v -f my_archive.tar some_directory
Extract a "tar" archive to the current directory, displaying each file and directory as tar runs verbosely.
tar xvf my_archive.tar
The same using pax:
pax -r -v -f my_archive.tar
Compress a file with gzip using maximum (slowest) compression
gzip -9 foobar.txt
Compress a file with bzip2 using maximum (slowest) compression. Bzip2 generally compresses files better at the cost of greater time during the compression phase. The archives it creates generally decompress at the same speed as gzip's.
bzip2 -9 foobar.txt
User Operations
Add a new user and create their home directory
useradd -m jsmith
Change a user's default shell
chsh -s /path/newshell jsmith
or
chpass -s /bin/ksh jsmith
Package Operations (pkgsrc)
Add a binary package
pkg_add my_binary_package.tgz
List all (installed) packages
pkg_info
List all files that are part of a package
pkg_info -L the_package_name
Delete a package
pkg_delete the_package_name
Create a binary package file from an already installed package (this requires the pkg_tarup tool from pkgsrc, which lives under the pkgtools subdirectory).
pkg_tarup the_package_name
Advanced Command Line Recipes
Create a binary package for every single installed package on the system and place them all in the current directory.
for pkg in `pkg_info -e "*" | sort`; do echo "Packaging $pkg"; pkg_tarup -d . $pkg 1>/dev/null; done
Introduction
The purpose of this document is to introduce the programmer to the methodology of kqueue, rather than providing a full and exhaustive documentation of its capabilities.
Kqueue provides a standard API for applications to register their interest in various events/conditions and have the notifications for these delivered in an efficient way. It was designed to be scalable, flexible, reliable and correct.
kqueue API
kevent data structure
The kevent structure goes like this:
struct kevent {
uintptr_t ident; /* identifier for this event */
uint32_t filter; /* filter for event */
uint32_t flags; /* action flags for kqueue */
uint32_t fflags; /* filter flag value */
int64_t data; /* filter data value */
void *udata; /* opaque user data identifier */
};
The <ident, filter> pair
A kevent is identified by an <ident, filter> pair. The ident might be a descriptor (file, socket, stream), a process ID, or a signal number, depending on what we want to monitor. The filter identifies the kernel filter used to process the respective event. There are some pre-defined system filters, such as EVFILT_READ and EVFILT_WRITE, which are triggered when data is available to read or when a write is possible, respectively.
If, for instance, we want to be notified when there's data available for reading on a socket, we have to specify a kevent of the form <sckfd, EVFILT_READ>, where sckfd is the file descriptor associated with the socket. If we would like to monitor the activity of a process, we would need a <pid, EVFILT_PROC> tuple. Keep in mind there can be only one kevent with the same <ident, filter> pair in our kqueue.
flags
After having designed a kevent, we should decide whether we want to have it added to our kqueue. For this purpose we set the flags member to EV_ADD. We could also delete an existing one by setting EV_DELETE, or just disable it with EV_DISABLE.
Combinations may be made by OR'ing the desired values. For instance, EV_ADD | EV_ENABLE | EV_ONESHOT
would translate to "Add the event, enable it and return only the first occurrence of the filter being triggered. After the user retrieves the event from the kqueue, delete it."
Reversely, if we would like to check whether a flag is set in a kevent, we would do it by AND'ing with the respective value. For instance:
if (myevent.flags & EV_ERROR) {
/* handle errors */
}
EV_SET() macro
The EV_SET() macro is provided for ease of initializing a kevent structure. For the time being we won't elaborate on the rest of the kevent members; instead let's have a look at the case when we need to monitor a socket for any pending data for reading:
struct kevent ev;
EV_SET(&ev, sckfd, EVFILT_READ, EV_ADD, 0, 0, 0);
If we wanted to monitor a set of N sockets, we would write something like this:
struct kevent ev[N];
int i;
for (i = 0; i < N; i++)
EV_SET(&ev[i], sckfd[i], EVFILT_READ, EV_ADD, 0, 0, 0);
kqueue(2)
The kqueue holds all the events we are interested in. Therefore, to begin with, we must create a new kqueue. We do so with the following code:
int kq;
if ((kq = kqueue()) == -1) {
perror("kqueue");
exit(EXIT_FAILURE);
}
kevent(2)
At this point the kqueue is empty. In order to populate it with a set of events, we use the kevent(2) function. This system call takes the array of events we constructed before and does not return until at least one event is received (or the associated timeout expires). It returns the number of events received and stores information about them in another array of struct kevent elements.
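For reference, the prototype of kevent(2) as given in the NetBSD manual page is:

int kevent(int kq, const struct kevent *changelist, size_t nchanges,
           struct kevent *eventlist, size_t nevents,
           const struct timespec *timeout);

Note that the changelist and eventlist arguments are independent: the same array may be passed for both, or either may be NULL (with the corresponding count set to 0) when we only want to register changes or only want to retrieve events.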
struct kevent chlist[N]; /* events we want to monitor */
struct kevent evlist[N]; /* events that were triggered */
int nev, i;
/* populate chlist with the events we are interested in */
/* ... */
/* loop forever */
for (;;) {
nev = kevent(kq, chlist, N,
evlist, N,
NULL); /* block indefinitely */
if (nev == -1) {
perror("kevent()");
exit(EXIT_FAILURE);
}
else if (nev > 0) {
for (i = 0; i < nev; i++) {
/* handle events */
}
}
}
timeout
Sometimes it is useful to set an upper time limit for kevent() to block. That way, it will return after the timeout expires even if no event was triggered. For this purpose we need the timespec structure, which is defined in sys/time.h:
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* and nanoseconds */
};
The above code would turn into the following:
struct kevent chlist[N]; /* events we want to monitor */
struct kevent evlist[N]; /* events that were triggered */
struct timespec tmout = { 5, /* block for 5 seconds at most */
0 }; /* nanoseconds */
int nev, i;
/* populate chlist with the events we are interested in */
/* ... */
/* loop forever */
for (;;) {
nev = kevent(kq, chlist, N,
evlist, N,
&tmout); /* set upper time limit to block */
if (nev == -1) {
perror("kevent()");
exit(EXIT_FAILURE);
}
else if (nev == 0) {
/* handle timeout */
}
else if (nev > 0) {
for (i = 0; i < nev; i++) {
/* handle events */
}
}
}
Note that if one passes a timespec structure with both fields set to zero, kevent() will return immediately, bringing performance down to the level of a plain polling loop.
Examples
A timer example
The following code will set up a timer that triggers a kevent every 5 seconds. Each time it does, the process forks and the child executes the date(1) command.
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* for strerror() */
#include <unistd.h>
/* function prototypes */
void diep(const char *s);
int main(void)
{
struct kevent change; /* event we want to monitor */
struct kevent event; /* event that was triggered */
pid_t pid;
int kq, nev;
/* create a new kernel event queue */
if ((kq = kqueue()) == -1)
diep("kqueue()");
/* initialise kevent structure */
EV_SET(&change, 1, EVFILT_TIMER, EV_ADD | EV_ENABLE, 0, 5000, 0);
/* loop forever */
for (;;) {
nev = kevent(kq, &change, 1, &event, 1, NULL);
if (nev < 0)
diep("kevent()");
else if (nev > 0) {
if (event.flags & EV_ERROR) { /* report any error */
fprintf(stderr, "EV_ERROR: %s\n", strerror(event.data));
exit(EXIT_FAILURE);
}
if ((pid = fork()) < 0) /* fork error */
diep("fork()");
else if (pid == 0) /* child */
if (execlp("date", "date", (char *)0) < 0)
diep("execlp()");
}
}
close(kq);
return EXIT_SUCCESS;
}
void diep(const char *s)
{
perror(s);
exit(EXIT_FAILURE);
}
Compile and run:
$ gcc -o ktimer ktimer.c -Wall -W -Wextra -ansi -pedantic
$ ./ktimer
Tue Mar 20 15:48:16 EET 2007
Tue Mar 20 15:48:21 EET 2007
Tue Mar 20 15:48:26 EET 2007
Tue Mar 20 15:48:31 EET 2007
^C
A raw TCP client
We will implement a raw TCP client using the kqueue framework. Whenever the host sends data to the socket, we will print it to the standard output stream. Similarly, when the user types something in the standard input stream, we will send it to the host through the socket. Basically, we need to monitor the following:
- any incoming host data in the socket
- any user data in the standard input stream
#include <sys/event.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define BUFSIZE 1024
/* function prototypes */
void diep(const char *s);
int tcpopen(const char *host, int port);
void sendbuftosck(int sckfd, const char *buf, int len);
int main(int argc, char *argv[])
{
    struct kevent chlist[2];   /* events we want to monitor */
    struct kevent evlist[2];   /* events that were triggered */
    char buf[BUFSIZE];
    int sckfd, kq, nev, i;
    /* check argument count */
    if (argc != 3) {
        fprintf(stderr, "usage: %s host port\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    /* open a connection to a host:port pair */
    sckfd = tcpopen(argv[1], atoi(argv[2]));
    /* create a new kernel event queue */
    if ((kq = kqueue()) == -1)
        diep("kqueue()");
    /* initialise kevent structures */
    EV_SET(&chlist[0], sckfd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, 0);
    EV_SET(&chlist[1], fileno(stdin), EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, 0);
    /* loop forever */
    for (;;) {
        nev = kevent(kq, chlist, 2, evlist, 2, NULL);
        if (nev < 0)
            diep("kevent()");
        else if (nev > 0) {
            if (evlist[0].flags & EV_EOF)   /* read direction of socket has shutdown */
                exit(EXIT_FAILURE);
            for (i = 0; i < nev; i++) {
                if (evlist[i].flags & EV_ERROR) {   /* report errors */
                    fprintf(stderr, "EV_ERROR: %s\n", strerror(evlist[i].data));
                    exit(EXIT_FAILURE);
                }
                if (evlist[i].ident == sckfd) {     /* we have data from the host */
                    memset(buf, 0, BUFSIZE);
                    if (read(sckfd, buf, BUFSIZE) < 0)
                        diep("read()");
                    fputs(buf, stdout);
                }
                else if (evlist[i].ident == fileno(stdin)) { /* we have data from stdin */
                    memset(buf, 0, BUFSIZE);
                    fgets(buf, BUFSIZE, stdin);
                    sendbuftosck(sckfd, buf, strlen(buf));
                }
            }
        }
    }
    close(kq);
    return EXIT_SUCCESS;
}
void diep(const char *s)
{
    perror(s);
    exit(EXIT_FAILURE);
}
int tcpopen(const char *host, int port)
{
    struct sockaddr_in server;
    struct hostent *hp;
    int sckfd;
    if ((hp = gethostbyname(host)) == NULL)
        diep("gethostbyname()");
    if ((sckfd = socket(PF_INET, SOCK_STREAM, 0)) < 0)
        diep("socket()");
    server.sin_family = AF_INET;
    server.sin_port = htons(port);
    server.sin_addr = *((struct in_addr *)hp->h_addr);
    memset(&(server.sin_zero), 0, 8);
    if (connect(sckfd, (struct sockaddr *)&server, sizeof(struct sockaddr)) < 0)
        diep("connect()");
    return sckfd;
}
void sendbuftosck(int sckfd, const char *buf, int len)
{
    int bytessent, pos;
    pos = 0;
    do {
        if ((bytessent = send(sckfd, buf + pos, len - pos, 0)) < 0)
            diep("send()");
        pos += bytessent;
    } while (bytessent > 0);
}
Compile and run:
$ gcc -o kclient kclient.c -Wall -W -Wextra -ansi -pedantic
$ ./kclient irc.freenode.net 7000
NOTICE AUTH :*** Looking up your hostname...
NOTICE AUTH :*** Found your hostname, welcome back
NOTICE AUTH :*** Checking ident
NOTICE AUTH :*** No identd (auth) response
_USER guest tolmoon tolsun :Ronnie Reagan_
_NICK Wiz_
:herbert.freenode.net 001 Wiz :Welcome to the freenode IRC Network Wiz
^C
(The lines in italics are what we type.)
More examples
More kqueue examples (including the aforementioned) may be found here.
Documentation
- kqueue(2): kqueue, kevent NetBSD Manual Pages
- Kqueue: A generic and scalable event notification facility (pdf)
- Kqueue slides
- The Julipedia: An example of kqueue