Code bounties
Anyone can add more projects or rewards, but please don't add my name next to your rewards.
Port NetBSD to Toshiba AC100 series
- kefren - $100
Make usr.sbin/ldpd IPv6 compatible - implement draft-manral and support the IPv6 AF in all messages
- kefren - $50
Port FreeBSD's bwn(4)
- kefren - $100
This article is a collection of random notes I have gathered during kernel development. These are hints every developer knows, but there is no documentation where a newcomer can learn them.
Finding where the bug is
When you get a crash in the kernel you want to translate the address from the backtrace to the line in the source code:
Stopped in pid 496.1 (gdb) at netbsd:breakpoint+0x5: leave
First, you need to find the address of the breakpoint function in the running kernel image with the nm(1) command:
nm netbsd | grep breakpoint
Then add 0x5 to the address, and use addr2line(1) to get the exact line in the kernel source code where the crash occurred:
addr2line -e netbsd {sum address}
In gdb(1), this can be achieved with the command info line *(function_name)+0x5.
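The two steps above can be combined into a small shell sketch. The kernel image name (netbsd), the symbol name (breakpoint), and the 0x5 offset are placeholders taken from the example backtrace; adjust them to your own crash:

```shell
# Resolve a ddb backtrace entry like "netbsd:breakpoint+0x5" to a source line.
# KERNEL, SYMBOL and OFFSET are example values from the backtrace above.
KERNEL=netbsd
SYMBOL=breakpoint
OFFSET=0x5

if [ -f "$KERNEL" ]; then
    # nm prints "address type name"; keep only the address of our symbol.
    base=0x$(nm "$KERNEL" | awk -v s="$SYMBOL" '$3 == s { print $1 }')
    # Add the offset from the backtrace to the symbol's base address.
    addr=$(printf '%#x' $((base + OFFSET)))
    echo "faulting address: $addr"
    addr2line -e "$KERNEL" "$addr"
else
    echo "kernel image '$KERNEL' not found; point KERNEL at your build" >&2
fi
```

The address arithmetic is the only non-obvious part: nm prints bare hex, so the 0x prefix has to be added before the shell can do the addition.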
What to do if ddb backtrace doesn't work
The DDB backtrace command usually doesn't work when the EIP register was set to NULL, e.g. via a bad function pointer. In this case we can get part of the backtrace by using a different approach.
db> show all reg
eip 0 cs 0 eflags 0 esp 0xcb741b70
We need to find the address in the ESP register (the stack pointer register on i386). Once we have the address, use
x /Lx 0xcb741b70,20
to print the first 20 entries from the stack. To easily find the address of the last function, look for an address starting with 0xc0.
The command x /I c06428fc
will then translate the function address to its name via a symbol table lookup.
What to do if gdb cannot backtrace through trap()
In gdb, load the stack script with source .../sys/arch/i386/gdbscripts/stack, then run stack. See also PR 10313.
How to rebuild /boot
(This example assumes you are running NetBSD-i386)
* Make sure you have the tools built
* Change to sys/arch/i386/stand/boot and run $TOOLDIR/bin/nbmake-i386
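The two steps can be sketched as a short script. The /usr/src location and the TOOLDIR default below are assumptions; adjust both to your own build setup:

```shell
# Rebuild /boot for NetBSD-i386, assuming a source tree in /usr/src and a
# tools build in $TOOLDIR (the default path here is only an example).
SRCDIR=${SRCDIR:-/usr/src}
TOOLDIR=${TOOLDIR:-/usr/obj/tooldir.NetBSD-i386}
BOOTDIR="$SRCDIR/sys/arch/i386/stand/boot"

if [ -d "$BOOTDIR" ] && [ -x "$TOOLDIR/bin/nbmake-i386" ]; then
    # nbmake-i386 is the cross-make wrapper produced by build.sh tools.
    cd "$BOOTDIR" && "$TOOLDIR/bin/nbmake-i386"
else
    echo "source tree or tooldir not found; adjust SRCDIR/TOOLDIR" >&2
fi
```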
Developer Key Signing
Developers need to generate, maintain, and sign keys to keep a web of trust. The following are shortcut commands to accomplish this.
Many of the commands will present various prompts that should be obvious (selecting keys from a list, entering passphrases, etc.). That verbiage has been elided from the examples.
- Key Generation
- Extend Expiration
- Key Uploading
- Key Download
- Key Signing
- Signature Upload
Key Generation
TBD
Extending Expiration
Your key will eventually expire. You can extend the key expiration time:
netpgp:
unsupported at this time.
gpg:
# gpg --edit-key C631C69E
Command> expire
Key is valid for? (0) 5y
You will need to re-upload to the key-server.
Key Uploading
gpg:
# gpg --keyserver pgp.mit.edu --send-keys C631C69E
Key Download
If you have the fingerprint, it's pretty easy to download the key. This will import it into your keychain.
netpgp: (Only if already downloaded from keyserver.)
# netpgpkeys --import-key file
gpg:
# gpg --keyserver pgp.mit.edu --search-keys C631C69E
Key Signing
gpg:
# gpg --default-key cyber@netbsd.org --sign-key C631C69E
Signature Upload
gpg:
# gpg --keyserver pgp.mit.edu --send-keys E361D0FA
Notes on Desktop Project
Some links on Desktop Project
- http://mail-index.netbsd.org/netbsd-desktop
- http://www.wired.com/news/technology/0,70037-0.html - something to muse on
- http://wiki.netbsd.org/projects/code-in/ - parts of general plan
- remote - remote desktop software
The wiki page with project ideas has vanished. Someone needs to dig it out.
Opinions
I've discussed the state of Desktop NetBSD Project (DNP) with various developers on IRC and in mail, and I've received different opinions on how developers view it.
I shall not discuss problems arising from lack of hardware drivers, most notably network interfaces, wireless and "wireful," and graphical adapters. I'm concentrating on more general questions here.
Binary packages
One perceived problem (mbalmer) is that we don't have any toolkit for graphical user interfaces in base, and thus no way to write anything with a GUI. The lack of readily available applications using such a toolkit is considered less important: since there's no toolkit, no applications are available.
Another perceived problem is the lack of binary package updates in pkgsrc. I still don't understand what exactly the problem is here, and nobody has cared to explain it in detail. We have several different ways to manage software installations using binary packages. Besides using pkgsrc in a way that reuses binary packages ("bin-install" in DEPENDS_TARGET), there exists pkg_chk with support for binary updates, and there exists pkg_rolling-replace, which, I think, can be set up to reuse binary packages as well.
My perception is that this "binary packages problem" is imaginary. I've heard some loud praise of pkgin, but from no more than a few voices. Thus I'd rather attribute this problem to lack of experience, lack of documentation, or very scarce publicity than to lack of support. I don't deny, though, that real problems exist which may prevent users from using pkgsrc effectively.
X11
Tobias Nygren (tnn) suggested that removing X.org from base could free human resources and help the development of a more coherent system.
Indeed, moving the base X11 version into pkgsrc brings at least one major benefit: it is much easier to update a package than a part of the base system. Also, pkgsrc has a much shorter release cycle, a quarter rather than two or three years. This means that developers can spend their time more effectively; they can save the time otherwise spent adapting new packages to the older X.org libraries, drivers, or applications found in older NetBSD releases.
It was argued (joerg) that there are very few sensible reasons to continue development of base X.org: one of them is cross-compilation, another is ease of development. pkgsrc has provided some cross-compilation support for quite a long time; there exist documents describing how to use it, and one of them addresses cross-compilation of (modular) X.org specifically. Thus only one reason remains: ease of development.
I've heard two different opinions related to the ease of development. David Holland pointed out that we need topic-oriented patches in pkgsrc; this requires pkgsrc tools with functionality similar to quilt. Tobias Nygren expressed the more radical view that the convenience of two or three developers shouldn't hold back the whole project.
It should be possible to ease the transition by using the support for CVS-based packages from pkgsrc-wip. In my opinion, this could help X.org hackers working with the CVS X.org version (the xsrc module) during the development cycle. NetBSD could distribute its own X.org version for some time, co-existing with pkgsrc's version. This idea met rather strong opposition, and I don't insist on performing the transition exactly this way.
Applications
Priorities
It would be nice to have a list of important packages.
While it may sometimes be hard to reach consensus, there exist packages which are unique (Firefox, OpenOffice) or for which there are few important alternatives. A (prioritised) list of them would be nice to have.
An approximation of it could be a list of packages most used by users.
Each quarter we ask users to provide information on installed packages:
"We'd also really appreciate it if people would install the pkgsrc/pkgtools/pkgsurvey package, and then run the pkgsurvey script for us. This will forward us a list of the packages installed on that machine, and the operating system and release level of the operating system. The results will be kept confidential, but the output will help us analyse the packages that are most used."
It is not clear
- why the information is kept secret;
- if there's enough statistics being gathered;
- if this information is used at all.
Perhaps we should publish it or start publishing it in future.
Release cycle
We need someone running pkgsrc bulk builds from current tree before freeze.
We don't even see build problems before the first bulk build results, which appear closer to the planned end of the freeze. Sure, knowing a problem exists doesn't automatically entail a quick fix, but currently we don't even know the problem is there in the first place. (E.g. in the 2010Q4 freeze the problem with the renderproto package was discovered 3 days before the freeze ended.)
We need pkgsrc bulk builds with modular X.org.
In many cases base X.org is too old to provide the necessary hardware support, so a significant number of users are forced to use pkgsrc X.org.
Organisation
It is obvious from the above that many problems need an organised effort to be solved. Some of them are too large to be worked on single-handedly; others require the cooperation of other developers or even users.
It isn't clear whether we can get X.org out of base in the realistic future, since it requires the cooperation of unnamed X.org hackers and, most probably, some other developers.
It isn't clear whether we can get a realistic picture of pkgsrc usage at all, since it requires the cooperation of users, at the very least.
It isn't yet clear whether we can get a realistic description of the use cases for binary packages, let alone improve anything in this area. This requires a rather long period of different people maintaining different systems in different ways.
What is clear, in my opinion, is that we have organisational problems and a very passive community. There's a very strong faction of developers and users who want Unix as it was decades ago.
Unsorted/unprocessed
- lack of interactivity support in pkgsrc tools
- NetBSD-specific problems in X.org (possibly connected to 64-bit time_t): touchscreen loses ButtonRelease events, problems with X_GetImage (in Xnest and other applications, e.g. FriCAS)
- touchscreen calibration support
- X server which doesn't need configuration file
- "Distribution" based on NetBSD? *
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 73 (X_GetImage)
I decided to buy a new computer and set up a continuous building service on it, to always have freshly built binaries ready for use and to be able to test my changes more easily. As the continuous integration service I chose Buildbot. Buildbot was packaged in pkgsrc-wip; I updated it to its latest version, 0.8.2, and introduced a new package for the buildbot slave program.
NetBSD setup
I created a new logical volume and mounted it at the /usr/devel directory. After that I created a new user called buildbot.
As root:
useradd -m buildbot # Add buildbot user
# Create Buildbot directories
mkdir /usr/devel/buildbot
mkdir /usr/devel/buildslave
chown buildbot /usr/devel/buildbot /usr/devel/buildslave
# Install Buildbot daemon and Buildbot slave
cd /usr/pkgsrc/wip/buildbot
make install
cd /usr/pkgsrc/wip/buildslave
make install
For my build setup I needed to change /etc/login.conf and add the buildbot user to the builder login class:
builder:\
:datasize-cur=1024M:\
:datasize-max=infinity:\
:maxproc-max=1000:\
:maxproc-cur=1000:\
:openfiles-cur=1024:
usermod -L builder buildbot
Buildbot setup
Buildmaster Setup
First we need to initialise a new buildmaster directory.
As buildbot user:
cd /usr/devel
buildbot create-master /usr/devel/buildmaster
I decided that I want to run at most 2 parallel builds on one build slave and to have the system built once a day. After setup I used this configuration file to get the buildmaster working.
# -*- python -*-
# ex: set syntax=python:
c = BuildmasterConfig = {}
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("bot1name", "bot1pass", max_builds=2)]
c['slavePortnum'] = 9989
from buildbot.changes.pb import PBChangeSource
c['change_source'] = PBChangeSource()
from buildbot.scheduler import Scheduler
c['schedulers'] = []
from buildbot.schedulers import timed
s = timed.Nightly(name='daily',
builderNames=["buildbot-netbsd-vanilla-i386", "buildbot-netbsd-vanilla-amd64"],
hour=13,
minute=0)
c['schedulers'] = [s]
cvsroot = ":pserver:anoncvs@anoncvs.netbsd.org:/cvsroot"
cvsmodule = "src"
from buildbot.process import factory
from buildbot.steps.source import CVS
from buildbot.steps.shell import Compile
from buildbot.steps.shell import ShellCommand
from buildbot.steps.shell import Test
from buildbot.steps.python_twisted import Trial
f1 = factory.BuildFactory()
f1.addStep(CVS(cvsroot=cvsroot, cvsmodule=cvsmodule, login="", mode="update"))
f1.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/clean.sh","i386"]))
f1.addStep(Compile(command=["/usr/devel/buildbot/bin/build.sh","i386"],
    warningPattern="^Warning: "))
f1.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/test.sh","i386"]))
f2 = factory.BuildFactory()
f2.addStep(CVS(cvsroot=cvsroot, cvsmodule=cvsmodule, login="", mode="update"))
f2.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/clean.sh","amd64"]))
f2.addStep(Compile(command=["/usr/devel/buildbot/bin/build.sh","amd64"],
    warningPattern="^Warning: "))
f2.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/test.sh","amd64"]))
f3 = factory.BuildFactory()
f3.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/getdev_src.sh", "dev-i386"]))
f3.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/clean.sh","dev-i386"]))
f3.addStep(Compile(command=["/usr/devel/buildbot/bin/build.sh","dev-i386"],
    warningPattern="^Warning: "))
f3.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/test.sh","dev-i386"]))
f4 = factory.BuildFactory()
f4.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/getdev_src.sh", "dev-amd64"]))
f4.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/clean.sh","dev-amd64"]))
f4.addStep(Compile(command=["/usr/devel/buildbot/bin/build.sh","dev-amd64"],
    warningPattern="^Warning: "))
f4.addStep(ShellCommand(command=["/usr/devel/buildbot/bin/test.sh","dev-amd64"]))
b1 = {'name': "buildbot-netbsd-vanilla-i386",
'slavename': "bot1name",
'builddir': "full",
'factory': f1,
}
b2 = {'name': "buildbot-netbsd-vanilla-amd64",
'slavename': "bot1name",
'builddir': "full-amd64",
'factory': f2,
}
b3 = {'name': "buildbot-netbsd-development-tree-i386",
'slavename': "bot1name",
'builddir': "dev-i386",
'factory': f3,
}
b4 = {'name': "buildbot-netbsd-development-tree-amd64",
'slavename': "bot1name",
'builddir': "dev-amd64",
'factory': f4,
}
c['builders'] = [b1, b2, b3, b4]
c['status'] = []
from buildbot.status import html
from buildbot.status.web.authz import Authz
authz = Authz(
forceBuild=True,
forceAllBuilds=True,
stopBuild=True,
stopAllBuilds=True,
cancelPendingBuild=True)
c['status'].append(html.WebStatus(http_port=8010, authz=authz))
c['projectName'] = "NetBSD development daily builds"
c['projectURL'] = "www.netbsd.org/~haad/builds/"
c['buildbotURL'] = "http://musasi.haad.chillisys.com:8010/"
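Assuming the configuration above is saved as master.cfg in /usr/devel/buildmaster, the Buildbot 0.8.x command-line tool can validate and start the master. This is a sketch; the guard merely skips the commands when buildbot isn't installed or the directory is missing:

```shell
MASTERDIR=/usr/devel/buildmaster

if command -v buildbot >/dev/null 2>&1 && [ -d "$MASTERDIR" ]; then
    # checkconfig parses master.cfg and reports errors without starting
    # anything, which is worth doing after every config edit.
    buildbot checkconfig "$MASTERDIR"
    # start launches the master; logs go to twistd.log in MASTERDIR.
    buildbot start "$MASTERDIR"
else
    echo "buildbot not installed or $MASTERDIR missing; nothing to do" >&2
fi
```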
For easier building and testing I added 3 scripts to buildbot-basedir/bin/:
- clean.sh
- build.sh
- test.sh
From the master.cfg file you can see that all of these scripts are called with one argument, which is basically {prefix}-{arch}, where:
- {prefix} is a prefix name distinguishing different builds for the same arch (development tree, vanilla tree, some NetBSD branch, etc.)
- {arch} is the architecture name used for the build
I implemented one version of each script for every architecture built by my buildbot server.
The clean script is used to clean the build directory on the slave before a build starts:
#!/bin/sh
dirname=$1
# get arch from dirname
arch=$(echo $dirname | cut -f 2 -d\-);
buildslave_dir="/usr/devel/buildslave/"
obj_dir="${buildslave_dir}/obj/${dirname}"
echo "Removing build artefacts from ${obj_dir}/*"
rm -rf ${obj_dir}/*
The build script builds the whole system:
#!/bin/sh
dirname=$1
j_flag=9
arch=$(echo $dirname | cut -f 2 -d\-);
buildslave_dir="/usr/devel/buildslave/"
obj_dir="${buildslave_dir}/obj/${dirname}"
echo "Building NetBSD tree for architecture ${arch} in $(pwd), objdir ${obj_dir}"
./build.sh -V GMAKE_J_ARGS="-j ${j_flag}" -O ${obj_dir} -T ${obj_dir}/tooldir -Uu -m ${arch} -j ${j_flag} release
The test script will later run anita and use it to run the regression test suite:
#!/bin/sh
dirname=$1
arch=$(echo $dirname | cut -f 2 -d\-);
buildslave_dir="/usr/devel/buildslave/"
obj_dir="${buildslave_dir}/obj/${dirname}"
anita_dir="${buildslave_dir}/anita/${dirname}"
release_dir="${obj_dir}/releasedir/${arch}/"
echo "Running tests on ${arch}"
anita --workdir ${anita_dir} test ${release_dir}
Buildslave setup
The buildslave is the actual worker in the buildbot cluster; it performs the work scheduled for it by the buildmaster. The create-slave command takes the arguments basedir, [buildmaster hostname:port], buildslave name, and password. The name and password must match those in the master.cfg file.
As buildbot user:
buildslave create-slave /usr/devel/buildslave localhost:9989 buildslave buildslavepass
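Once the slave directory has been created, the slave can be started with the buildslave tool from the wip/buildslave package (Buildbot 0.8.x). A minimal sketch; the existence checks are only there so the snippet is safe to run anywhere:

```shell
SLAVEDIR=/usr/devel/buildslave

if command -v buildslave >/dev/null 2>&1 && [ -f "$SLAVEDIR/buildbot.tac" ]; then
    # The slave connects to the master at localhost:9989 and authenticates
    # with the name/password given to create-slave above.
    buildslave start "$SLAVEDIR"
else
    echo "buildslave not installed or $SLAVEDIR not initialised; skipping" >&2
fi
```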