Posted Saturday afternoon, July 3rd, 2010
Posted late Saturday evening, July 3rd, 2010



Otherwise known as "Cryo". @Cryo on Twitter.

==Experience getting NetBSD-7 running on the eeepc900==

VERY IMPORTANT *You cannot configure a Wi-Fi network with sysinst during installation. The network configuration step will not use wpa_supplicant or wiconfig, and it does not differentiate between Ethernet and Wi-Fi when displaying interfaces. This gives the false impression that configuring ath0 will prompt for SSID, encryption type, and password, generate /etc/wpa_supplicant.conf, and add the necessary entries to rc.conf. The fact that wpa_supplicant cannot accept this information on the command line and requires a configuration file may be part of the problem.
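Until sysinst can generate these files, they can be written by hand after installation. A minimal sketch, assuming the ath0 interface from above; the SSID and passphrase are placeholders:

```
# /etc/wpa_supplicant.conf ("myssid" and "mypassphrase" are placeholders)
ctrl_interface=/var/run/wpa_supplicant
network={
        ssid="myssid"
        psk="mypassphrase"
}

# and in /etc/rc.conf:
wpa_supplicant=YES
wpa_supplicant_flags="-i ath0 -c /etc/wpa_supplicant.conf"
dhcpcd=YES
```

With these in place, rc starts wpa_supplicant and dhcpcd at boot.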

IMPORTANT *A DRMKMS kernel appears to select the wrong display output: the LCD goes black, or does a weird fade to black.

NOT IMPORTANT *A class 10 SD card seems rather slow on the USB bus, slower than under other operating systems I have run on this machine. Could this be too few wakeups in the USB driver?

Posted at lunch time on Sunday, July 4th, 2010

Quickstart with LDP daemon

Step 1: Boot an MPLS enabled kernel - see MPLS
Step 2: Run /usr/sbin/ldpd
Step 3: Add some routes to neighbors' bindings and watch them being tagged automatically, or just use zebra/quagga to avoid manual routes
Step 4: Inspect reports on the control interface: telnet localhost 2626 (use the root password; a question mark lists the available commands)

/etc/ldpd.conf example

# a default ldpd.conf
#       hello-time 8;
        max-label 400;
        min-label 200;
        command-port 2424;
#       ldp-id;

# USE TCP SIGNATURE FOR THIS NEIGHBOUR - don't forget to add entries in ipsec.conf && echo ipsec=yes >> /etc/rc.conf
# add -4 tcp 0x1000 -A tcp-md5 "mypassword" ;

        neighbour {
                authenticate yes;
        }
Posted late Sunday evening, July 4th, 2010

Some quick info about MPLS:

You need to compile your kernel with options MPLS and pseudo-device ifmpls. For a pure LSR, one that only switches labels without encapsulating/decapsulating to or from other protocols (e.g. INET), you need to work on the routing table ONLY. For example:

# route add -mpls 41 -tag 25 -inet
add host 41: gateway
# route add -mpls 42 -tag 30 -inet
add host 42: gateway
# route add -mpls 51 -tag 25 -inet
add host 51: gateway

Translation of the first line: if the router receives an MPLS frame with label 41, forward it to the INET next-hop, swapping label 41 for label 25.
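A complete version of that command, using a made-up next-hop address (hypothetical; substitute your real INET neighbor):

```shell
# Swap incoming label 41 for label 25 and forward to the INET next-hop.
# 198.51.100.2 is a placeholder documentation address.
route add -mpls 41 -tag 25 -inet 198.51.100.2
```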

You also need to tweak sysctl to accept and forward MPLS:

# sysctl -w net.mpls.accept=1
net.mpls.accept: 0 -> 1
# sysctl -w net.mpls.forwarding=1
net.mpls.forwarding: 0 -> 1
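These sysctl settings do not survive a reboot; to make them persistent, they can also be placed in /etc/sysctl.conf, which rc applies at boot:

```
# /etc/sysctl.conf
net.mpls.accept=1
net.mpls.forwarding=1
```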

Verify routes with route get or, better, with netstat -nrT:

Destination        Gateway            Flags    Refs      Use    Mtu     Tag Interface
41             UGHS        0    37241      -      25 sk0
42             UGHS        0        0      -      30 sk0
51             UGHS        0        0      -      25 sk0

Interacting with other protocols

If you also want to decapsulate/encapsulate between MPLS and some other protocol (like INET or INET6), then you have to create an mpls interface and bring it up.

# ifconfig mpls0 create up

After that, create routes using the -ifa flag to specify the source interface (used for the source IP address of host-generated packets), and route them through the mpls0 interface.

# route add -ifa -ifp mpls0 -tag 25 -inet
add net gateway
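One plausible full form of that command, with made-up documentation addresses (all three addresses are hypothetical placeholders):

```shell
# 198.51.100.1 is our local source address, 192.0.2.0/24 the remote network,
# and 198.51.100.2 the next hop; label 25 is pushed on outgoing packets.
route add -ifa 198.51.100.1 -ifp mpls0 -tag 25 -inet 192.0.2.0/24 198.51.100.2
```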

Verify the route:

# route -n get
   route to:
        Tag: 25
 local addr:
  interface: mpls0
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0       813       344         0         0         0 

or with netstat -rT:

204.152.190/24      UGS         0    95362      -      25 mpls0

Test whether it works using traceroute -M. Notice that the first hop reports label 25.

# traceroute -M
traceroute to (, 64 hops max, 40 byte packets
 1 (  2.892 ms  1.957 ms  1.992 ms [MPLS: Label 25 Exp 0]
 2 (  1.988 ms  1.961 ms  1.989 ms [MPLS: Label 27 Exp 0]
 3 (  1.990 ms  1.974 ms  2.009 ms
 4 (  2.651 ms  2.280 ms  3.663 ms
 5 (  33.944 ms  34.011 ms  33.869 ms [MPLS: Label 21669 Exp 0]
 6 (  33.946 ms  33.689 ms  33.929 ms
 7 (  35.930 ms  35.926 ms  35.917 ms
 8 (  43.940 ms  45.900 ms  47.916 ms
 9 (  59.901 ms  51.888 ms  51.913 ms
10 (  119.808 ms  119.780 ms  119.800 ms
11 (  141.755 ms  143.748 ms  149.756 ms
12 (  143.756 ms  143.757 ms  141.755 ms
13 (  145.831 ms  141.747 ms  143.762 ms
14 (  201.653 ms  205.650 ms  201.650 ms [MPLS: Label 16005 Exp 0]
15 (  201.663 ms  201.645 ms  201.664 ms
16 (  199.676 ms  201.652 ms  201.673 ms

See also LDP wiki page.

Posted late Sunday evening, July 4th, 2010

PAE and Xen balloon benchmarks


Three tests were performed to benchmark the kernel:

  1. Full -j4 release build runs; the results are those returned by time(1).
  2. hackbench, a popular tool used on Linux to benchmark thread/process creation time.
  3. sysbench, which can benchmark multiple aspects of a system; here, the memory bandwidth, thread creation, and OLTP (online transaction processing) tests were used.

All tests were run three times, with a reboot between runs.
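As a rough sketch, the three tests could be invoked like this (hypothetical parameters; the text does not give the exact command lines used):

```shell
time ./build.sh -j4 release         # full release build, timed with time(1)
hackbench 20                        # thread/process creation benchmark
sysbench --test=memory run          # memory bandwidth
sysbench --test=threads run         # thread creation
sysbench --test=oltp run            # OLTP (requires a configured database)
```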

The machine used:

# cpuctl list                                                      
Num  HwId Unbound LWPs Interrupts     Last change
---- ---- ------------ -------------- ----------------------------
0    0    online       intr           Sun Jul 11 00:25:31 2010
1    1    online       intr           Sun Jul 11 00:25:31 2010
# cpuctl identify 0                                                
cpu0: Intel Pentium 4 (686-class), 2798.78 MHz, id 0xf29
cpu0: features 0xbfebfbff
cpu0: features 0xbfebfbff
cpu0: features 0xbfebfbff
cpu0: features2 0x4400
cpu0: "Intel(R) Pentium(R) 4 CPU 2.80GHz"
cpu0: I-cache 12K uOp cache 8-way, D-cache 8KB 64B/line 4-way
cpu0: L2 cache 512KB 64B/line 8-way
cpu0: ITLB 4K/4M: 64 entries
cpu0: DTLB 4K/4M: 64 entries
cpu0: Initial APIC ID 0
cpu0: Cluster/Package ID 0
cpu0: SMT ID 0
cpu0: family 0f model 02 extfamily 00 extmodel 00
# cpuctl identify 1 
cpu1: Intel Pentium 4 (686-class), 2798.78 MHz, id 0xf29
cpu1: features 0xbfebfbff
cpu1: features 0xbfebfbff
cpu1: features 0xbfebfbff
cpu1: features2 0x4400
cpu1: "Intel(R) Pentium(R) 4 CPU 2.80GHz"
cpu1: I-cache 12K uOp cache 8-way, D-cache 8KB 64B/line 4-way
cpu1: L2 cache 512KB 64B/line 8-way
cpu1: ITLB 4K/4M: 64 entries
cpu1: DTLB 4K/4M: 64 entries
cpu1: Initial APIC ID 0
cpu1: Cluster/Package ID 0
cpu1: SMT ID 0
cpu1: family 0f model 02 extfamily 00 extmodel 00

This machine uses HyperThreading, so technically speaking it is not a true dual-CPU host.


[Figures: build-pae.png, hackbench-pae.png, sysbench-pae.png]

Overall, PAE costs 15-20% in memory performance; this is particularly noticeable with sysbench and hackbench, where memory bandwidth and thread/process creation time are all slower.

Userland remains largely unaffected, with differences in the 5% range; a -j4 build runs approximately 5% slower under PAE, in both the native and the Xen case.

Do not be surprised by the large "user" time in the native vs. Xen comparison. With the build performed with -j4 (4 make sub-jobs in parallel), many processes can run concurrently under native i386, crediting more time to userland, while under Xen the kernel is not SMP capable.

When comparing Xen with a native kernel with all CPUs but one turned offline, we observe an overhead of 15 to 20%, mostly at the "sys" (kernel) level, which directly affects the total time of a full -j4 release build. Contrary to original belief, Xen does add overhead. One exception is the memory bandwidth benchmark, where Xen (PAE and non-PAE) outperforms the native kernels in a UP context.

Notice that, in an MP context, the total build time improves by approximately 15% over the single-CPU configuration, with "sys" time nearly doubling when both CPUs are running. As the src/ directory remained the same between the two tests, we can assume the kernel was solicited concurrently about twice as much in the dual-CPU case as in the single-CPU case.

Xen ballooning

[Figures: build-balloon.png, hackbench-balloon.png, sysbench-balloon.png]

In essence, there is not much to say. Results are all within the 5% margin: adding the balloon thread did not drastically affect performance or process creation/scheduling. It is all noise. The timeout delay added by cherry@ seems reasonable (it can be revisited later, but does not appear critical).

Posted Saturday night, July 10th, 2010





Open Source Conference 2024 Hokkaido NetBSD Booth and NetBSD BoF

Open Source Conference 2024 Kyoto NetBSD Booth & NetBSD BoF

Open Developers Conference 2024 NetBSD BoF

Kansai Open Forum 2024


Nagoya *BSD Users' Group monthly meeting

Past Events in 2024

Open Source Conference 2024 Nagoya NetBSD Booth & NBUG BoF

Open Source Conference 2024 Tokyo/Spring NetBSD Booth

Open Source Conference 2024 Online/Spring NetBSD BoF

Open Source Conference 2024 Osaka NetBSD Booth & BoF

My current job mission

  • SOUM Corporation, Tokyo
  • Support Open Science Framework in Japan
  • Job offer: via SOUM Corporation.

Organize NetBSD booths and presentations around Japan.

NetBSD machines in Japan

NetBSD Raspberry PI Images

NetBSD/pinebook status

update Facebook Page

update togetter Page

NetBSD Travel Guide for NetBSD booth

Japan NetBSD Users' Group

Nagoya *BSD Users' Group

Supporting AsiaBSDCon


Posted in the wee hours of Monday night, July 20th, 2010