No videos are available yet to provide much-needed context to presentations, but we'll keep you posted.
Day -2 - Arrival in Vienna
After being thoroughly delayed by Deutsche Bahn, I hopped off an InterCity Express train to check out the hotel room for people speaking at EuroBSDCon, which was An Experience in itself. There was a mural of a shirtless man with a sword covered in snakes next to my bed. What else do you need in life? Lots of coffee, obviously.
Begin the march to the conference to listen to Marshall Kirk McKusick lecture on schedulers.
Day -1 - NetBSD Developer Summit
Around 16 NetBSD developers gathered in a room for the first time in two years. I was a little distracted and late because Marshall Kirk McKusick's very detailed lecture on filesystems had melted my brain somewhat, but we had the opportunity to give various informal presentations, after we'd finished showing off suspend/resume support on our ThinkPad laptops.
Benny Siegert opened with a presentation on the state of the Go programming language on NetBSD (and whether it is "in trouble"), covering various problems with instability being detected inside the Go test suite. Go is particularly interesting (and maybe error-prone) because it mostly bypasses NetBSD libc, which is unusual for software running on NetBSD, instead preferring to implement its own wrappers around the kernel's system calls.
A few problems had been narrowed down to being (likely) AMD CPU bugs, others weren't reproducible in production (outside of the test suite) at all, and others may have been fixed in NetBSD 9.1 - the NetBSD machines running tests for Go do need to be updated. If you're from AMD, please get in touch.
We've got a very impressive test suite for NetBSD itself, but outside tests are always useful for identifying problems that we can't catch... that said, they do require a lot of work to maintain, and a lack of patience is understandable. We'd love any help we can get with this.
I pointed out that we get occasional failures bootstrapping Go in pkgsrc, and better debug output would be nice -- Benny was able to arrange this within the day, and we should get nice detailed bootstrapping logs for Go now.
Pierre Pronchery (khorben@) discussed cross-BSD collaboration on synchronizing our device driver code bases, including his recent NetBSD Foundation-supported work on the emuxki(4) sound card driver, where other BSDs have taken the same code base but improvements had not yet been universal. We all agreed that collaboration and keeping drivers in sync is important. We talked about the on-going project to synchronize NetBSD Wi-Fi drivers with FreeBSD.
Martin Kjellstrand then gave us a very nice demonstration of his NetBSD docker images, and how easy it is to spin up NetBSD on-demand to run a command (this also has wide potential for being useful for testing). In turn, I rambled a bit about my own experiments with dynamically creating NetBSD images. This would lead to a later discussion about whether we need to prioritize improving the resize_ffs(8) command's support for new filesystems.
The theme of creating NetBSD images "for the cloud" continued, with Benny Siegert presenting again about NetBSD on Google Compute Engine.
Stephen Borrill then stepped up to give us an incredibly detailed history of the British computer company Acorn Computers, complete with his personal experiences servicing Acorn machines in the early 90s. We discussed the history of the ARM CPU, and NetBSD/acorn32.
Nia Alarie (surprise) finished up with a very short unplanned demonstration of some of the projects she's been working on lately - using NetBSD as a professional digital audio workstation, improving the default graphical experience of NetBSD with dynamically generated menus, and (again) creating customized micro-images of NetBSD. We discussed support for MIDI devices (I'd later chat with some of the FreeBSD people about collaborating on JACK MIDI).
We then retired to Thomas Klausner (wiz@)'s favorite ramen restaurant and discussed, among other things, Studio Ghibli films, and trains. Trains would be a recurring theme.
Day 0 - start of talks
We began the day with two NetBSD presentations scheduled back-to-back. This mostly meant that I got to talk about some of NetBSD 10's upcoming features, and why it's taking so long, to a small crowd of interested people who didn't have much prior experience with NetBSD, while in another room Taylor R. Campbell (riastradh@) discussed his very dedicated efforts to make suddenly disappearing devices more reliable and not crash the kernel (we're still waiting for a live demonstration).
Next, Pierre Pronchery (khorben@) discussed the power of pkgsrc for creating consistent environments across platforms for software developers, serving as a nice portable, classic Unix alternative to technologies like Docker and Nix.
The final presentation of the day was riastradh@ again, this time providing a live lecture (from Emacs!) about memory barriers in the kernel. We all learned to appreciate the nice abstractions technologies like mutexes provide to stop CPUs from re-ordering code on multi-processor machines in inexplicable ways.
Day 1 - final talks
The second day of EuroBSDCon presentations was mostly devoid of anything NetBSD-focused, so we had a nice opportunity for cross-pollination and to learn and collaborate with other BSD projects. I chatted a bit with an OpenBSD Ports developer about the challenge technologies like Rust pose to developing a cross-architecture packaging system, and with a FreeBSD person about the state of professional audio on our respective platforms. Michael Dexter finished the day of presentations with a very passionate speech about why we all need BSD in our lives, regardless of our preferred flavour.
More topics were discussed in the various break periods, including whether our newest update to the GPU drivers is stable enough to include in a release (verdict: works for me).
We then watched as various BSD t-shirts and boxes of chocolates were auctioned away to support a local refugee center. The organizing committee forgot to include the NetBSD Foundation on the list of sponsors, but we forgive them.
Other news from the Project
I've recently made sure the NetBSD 10 changelog is up to date with all the new goodness, so you should check that out.
Prologue
When I bought my house in 2004 I went shopping for an outside thermometer - and ended up with a full weather-station instead (a WS2300). When I unpacked it I found a serial cable inside...
Long story short - I was still in the process of recabling the house (running ethernet to every room) and added a serial cable from the machine room to the WS2300, and then did some pkgsrc work and got misc/open2300 and misc/open2300-mysql. I used those to log the data from the weather-station to a mysql database, and later moved that (via misc/open2300-pgsql) to a postgres database.
Now sometime this year the machine running that database had to be replaced (I should have done that earlier, it was power-hungry and wasteful). The replacement was an aarch64 SoC (a Pine64 Quartz64 model A) - and of course it had no real COM ports any more. I had experimented with USB serial adapters and the WS2300 before, but for unclear reasons this time I had no luck and couldn't get it to work. Since some of the outdoor sensors of the old weather-station had started failing, I decided to replace it.
New Weather-Station, new Sensors
I picked a WS3500 because it comes with a nice remote sensor arrangement:

I attached it to a satellite dish mount about 1.2m above my garage and ran a two-wire cable through the mount to supply it with 3V and get rid of any batteries. It does not have a connector for that, but the battery compartment had enough space for a 330µF electrolytic capacitor, and soldering that and the cable directly to the battery contacts was easy.
The sensors report to the weather-station via a proprietary protocol in the 868 MHz band.
New Weather-Station, new Reporting
The weather-station can connect to a wifi network but does not offer any services itself. The app used to configure the station offers several predefined weather collection services.
I found it a bit strange to have my local weather data logged to some server somewhere else in the cloud and then get it back via my browser, but for others this is a good thing. I found this article, which describes exactly that remote-only, no-machines-required-on-site setup. I used the article as inspiration for the data collection (but that part turned out to be quite trivial, see below) and copied a lot of the presentation site from it (also more details below).
So in my setup I created web servers on two dedicated ports of my tiny machine running the postgres server. One is used by the weather-station for reporting the data, the other is used to query the database.
The configuration of the weather-station for a custom server was easy:


I tested the ecowitt protocol first. It uses a POST to a fixed URL, and the form data is nearly identical to what we get with the solution I ended up with - only a few of the form field names are slightly different.
The blacked-out items "StationID" and "StationKey" appear verbatim in the reported data; you can set them to whatever you want - the scripts below do not check them.
The Weather Underground protocol does a simple HTTP GET and provides all data as query parameters (I had to add the trailing question mark in the configuration). This makes it very easy to extract the data in a script on the server side.
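For illustration, here is roughly what such a query string looks like by the time it reaches the CGI script. This is a made-up sample: the values are invented, the exact names of the station ID/key parameters may differ on your firmware, and only the measurement fields used by the update script below matter:

# hypothetical sample of the QUERY_STRING handed to the update script (values invented)
QUERY_STRING='ID=MYSTATIONID&PASSWORD=MYSTATIONKEY&tempf=55.4&humidity=71&dewptf=46.0&windchillf=55.4&winddir=202&windspeedmph=3.1&windgustmph=4.5&rainin=0.0&dailyrainin=0.12&totalrainin=10.33&baromin=29.92&indoortempf=70.2&indoorhumidity=45&solarradiation=112.2&UV=1'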
But let's get there step by step. NetBSD comes with an http/https server in base, originally called "bozohttpd". It is very lightweight, but it can run various types of scripts - I picked plain old simple CGI with /bin/sh as the language, using a bit of awk to convert units.
First I added two users, so I could separate file access rights. This is how they look in vipw:
weatherupdate:*************:1004:1004::0:0:Weather Update Service:/weather/home:/sbin/nologin
weatherquery:*************:1005:1004::0:0:Weather Query Service:/weather/query:/sbin/nologin

and two httpd instances for them. These are the /etc/inetd.conf entries, one to collect the incoming data and one to answer queries:
88 stream tcp nowait:600 weatherupdate /usr/libexec/httpd httpd -q -c /weather/cgi /weather/files
89 stream tcp nowait:600 weatherquery /usr/libexec/httpd httpd -q -c /weather/cgi -M .js "text/javascript" - - /weather/files
The document root (/weather/files) would not be used for the instance on port 88, but httpd needs one.
Note that these lines use the quiet flag ("-q") which is only available in netbsd-current. You can replace it with "-s" for older versions.
The home directories of both users are mostly empty, besides a .pgpass file that contains the password for that user's connection to the postgres server. They look like this:
127.0.0.1:5432:weatherhistory:open2300:xxxxxxxxxxxxxx
where "weatherhistory" is the datebase and "open2300" is the name of the postgres user for the update script and the password is x-ed out. The other file looks very similar:
127.0.0.1:5432:weatherhistory:weatherquery:xxxxxxxxxxx
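One practical note not shown above: libpq only uses a .pgpass file if it is not group- or world-readable, so the files need restrictive permissions, for example (paths taken from the vipw entries above):

# .pgpass must be private to its owner, or psql will silently ignore it
chown weatherupdate /weather/home/.pgpass
chmod 600 /weather/home/.pgpass
chown weatherquery /weather/query/.pgpass
chmod 600 /weather/query/.pgpass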
At the postgres level the user "weatherquery" needs to have SELECT privilege on the table "weather", and "open2300" needs to have INSERT privilege. The table schema (output of "pg_dump -s") looks like this:
--
-- Name: weather; Type: TABLE; Schema: public; Owner: weathermaster
--

CREATE TABLE public.weather (
    "timestamp" timestamp without time zone DEFAULT '1970-01-01 00:00:00'::timestamp without time zone NOT NULL,
    temp_in double precision DEFAULT '0'::double precision NOT NULL,
    temp_out double precision DEFAULT '0'::double precision NOT NULL,
    dewpoint double precision DEFAULT '0'::double precision NOT NULL,
    rel_hum_in integer DEFAULT 0 NOT NULL,
    rel_hum_out integer DEFAULT 0 NOT NULL,
    windspeed double precision DEFAULT '0'::double precision NOT NULL,
    wind_angle double precision DEFAULT '0'::double precision NOT NULL,
    wind_chill double precision DEFAULT '0'::double precision NOT NULL,
    rain_1h double precision DEFAULT '0'::double precision NOT NULL,
    rain_24h double precision DEFAULT '0'::double precision NOT NULL,
    rain_total double precision DEFAULT '0'::double precision NOT NULL,
    rel_pressure double precision DEFAULT '0'::double precision NOT NULL,
    wind_gust double precision DEFAULT 0 NOT NULL,
    light double precision DEFAULT 0 NOT NULL,
    uvi double precision DEFAULT 0 NOT NULL
);

ALTER TABLE public.weather OWNER TO weathermaster;

--
-- Name: weather weather_pkey; Type: CONSTRAINT; Schema: public; Owner: weathermaster
--

ALTER TABLE ONLY public.weather
    ADD CONSTRAINT weather_pkey PRIMARY KEY ("timestamp");

--
-- Name: TABLE weather; Type: ACL; Schema: public; Owner: weathermaster
--

GRANT INSERT ON TABLE public.weather TO open2300;
GRANT SELECT ON TABLE public.weather TO weatherquery;
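The dump does not include the roles themselves; a minimal sketch of how they could be created, run as a postgres superuser (the passwords are placeholders and must match the .pgpass files):

# create the two roles used by the CGI scripts and grant them the minimum privileges
psql -d weatherhistory <<'EOF'
CREATE ROLE open2300 LOGIN PASSWORD 'xxxxxxxxxxxxxx';
CREATE ROLE weatherquery LOGIN PASSWORD 'xxxxxxxxxxx';
GRANT INSERT ON TABLE public.weather TO open2300;
GRANT SELECT ON TABLE public.weather TO weatherquery;
EOF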
As noted above, I carried this database over (with minor modifications) from previous instances of the whole setup - so it may not be optimal or elegant. One thing that needs special attention is the "timestamp" column - it carries date/time in UTC and has no timezone associated. This looked like a natural choice, but has some unexpected consequences. When querying data in JSON format, "timestamp" will not get the JavaScript marker for "UTC", a "Z" suffix. So in the JavaScript code in the web pages you will find quite a few places that cover up for this.
Now when the weather station sends data to the configured server, inetd(8) runs httpd(8), which invokes the shell script /weather/cgi/update.cgi as the "weatherupdate" user. This script uses awk(1) to do a few unit conversions and to output an SQL command that inserts the data into the "weather" table. This SQL command is then piped to psql(1), with the connection string passed on the command line. The corresponding password is found in the ~/.pgpass of the "weatherupdate" user. The script looks like this:
#! /bin/sh
TZ=UTC; export TZ

awk -v $( echo "$QUERY_STRING" | sed 's/\&/ -v /g' ) 'BEGIN {
    temp=(tempf-32)/1.8;
    indoortemp=(indoortempf-32)/1.8;
    dewpt=(dewptf-32)/1.8;
    windchill=(windchillf-32)/1.8;
    windspeed=windspeedmph*1.609344;
    windgust=windgustmph*1.609344;
    rain=rainin*25.4;
    dailyrain=dailyrainin*25.4;
    totalrain=totalrainin*25.4;
    rel_preasure=baromin/0.029529980164712;
    printf("INSERT INTO weather VALUES ('"'"'%s'"'"', %f, %f, %f, %d, %d, %f, %d, %f, %f, %f, %f, %f, %f, %f, %f);\n",
        strftime("%F %T"), indoortemp, temp, dewpt, indoorhumidity, humidity,
        windspeed, winddir, windchill, rain, dailyrain, totalrain,
        rel_preasure, windgust, solarradiation, UV);
}' | psql "hostaddr='127.0.0.1'dbname='weatherhistory'user='open2300'" > /dev/null 2>&1
Note that it explicitly sets the timezone to UTC. The input data comes (as defined by CGI) via the QUERY_STRING environment variable, as a set of "field=value" items, separated by &. They are converted to sets of "-v" args for the awk invocation via a simple sed script.
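For example, with a hypothetical, shortened query string, the conversion works like this:

$ echo "tempf=55.4&humidity=71&winddir=202" | sed 's/\&/ -v /g'
tempf=55.4 -v humidity=71 -v winddir=202

so awk effectively gets invoked as awk -v tempf=55.4 -v humidity=71 -v winddir=202 'BEGIN { ... }'.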
With this in place, the weather-station adds a record to the database every five minutes. It was fun to check it via SQL, but for reasons not quite clear to me, most of the rest of the family did not like that kind of access very much.
psql (14.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

weatherhistory=> select min(temp_out), max(temp_out) from weather;
  min  | max
-------+------
 -18.1 | 80.9
(1 row)
I initially thought the 80.9°C were measured while I was soldering the power cable, but apparently they were fallout from the sometimes failing sensors of the old station. The database has 2840 rows with temp_out > 40°C and all of them are 80-something. I should replace them with an average of the neighboring records.
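Finding those rows is easy with the read-only user from above (actually rewriting them would need UPDATE rights, which neither of the two CGI users has). A quick check, sketched with the same connection string as in the scripts:

# count the bogus outdoor readings left over from the old station's failing sensor
printf "SELECT count(*) FROM weather WHERE temp_out > 40;\n" |
    psql --tuples-only "hostaddr='127.0.0.1'dbname='weatherhistory'user='weatherquery'"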
Presenting the data
So I needed an internal web site, which needs access to the data. The above setup already paved the way for that, via the second port I set up. I wanted to show all the current data on one page, and a variable window of history data on another - which meant two CGI scripts to query the data.
The /weather/cgi/latest.cgi script just fetches the last record logged and creates a JSON from it, and also uses pom(6) and the sunwait(1) program from pkgsrc to supply some site- and date-specific data:
#! /bin/sh

PATH=/usr/games:/usr/pkg/bin:$PATH

GEOPOS="51.505554N 0.075278W"   # geographic position of this weather station
UPDATE=300                      # seconds between updates

# This script uses psql(1) from pkgsrc/databases/postgresql14-client,
# pom(6) from the NetBSD games set and pkgsrc/misc/sunwait.

# collect global site data: sunrise and friends
eval $( sunwait report ${GEOPOS} | awk -F": " '
    /Sun directly north/   { printf("zenith=\"%s\"\n", $2); }
    /Daylight:/            { split($2,v," to "); printf("sunrise=\"%s\"\nsunset=\"%s\"\n", v[1], v[2]); }
    /with Civil twilight:/ { split($2,v," to "); printf("dawn=\"%s\"\ndusk=\"%s\"\n", v[1], v[2]); }
    /It is: Day/           { printf("day=true\n"); }
    /It is: Night/         { printf("day=false\n"); }
' )

# moon phase
eval $( pom | awk '-F(' '
    /The Moon is Full/ { printf("moontrend=\"-\"\nmoon=100\n"); }
    /The Moon is New/  { printf("moontrend=\"+\"\nmoon=0\n"); }
    /First Quarter/    { printf("moontrend=\"+\"\nmoon=50\n"); }
    /Last Quarter/     { printf("moontrend=\"-\"\nmoon=50\n"); }
    /Waxing/           { a=$0; sub(/^.*\(/, "", a); sub(/%.*$/, "", a); printf("moontrend=\"+\"\nmoon=%d\n", a+0); }
    /Waning/           { a=$0; sub(/^.*\(/, "", a); sub(/%.*$/, "", a); printf("moontrend=\"-\"\nmoon=%d\n", a+0); }
' )

# start the json output
printf "\n\n{ \"site\": { \"updates\": ${UPDATE}, \"dawn\": \"${dawn}\", \"sunrise\": \"${sunrise}\", \"zenith\": \"${zenith}\", \"day\": ${day}, \"sunset\": \"${sunset}\", \"dusk\": \"${dusk}\", \"moon\": { \"trend\": \"${moontrend}\", \"percent\": ${moon} }\n}, \"weather\":\n"

# fill database results
printf "WITH t AS ( SELECT * FROM weather ORDER BY timestamp DESC LIMIT 1 ) SELECT row_to_json(t) FROM t;\n" |
    psql --tuples-only --no-align "hostaddr='127.0.0.1'dbname='weatherhistory'user='weatherquery'"

# terminate json
printf "\n}\n"
As you can see, if you restricted the output to plain data from the database, the script would only be four or five lines long. But I like the additional spicing.
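For comparison, a stripped-down variant that only returns the latest record (same connection string and .pgpass setup as above) would look roughly like this:

#! /bin/sh
# minimal latest.cgi: just the newest database record, no site/sun/moon extras
printf "\n\n"
printf "WITH t AS ( SELECT * FROM weather ORDER BY timestamp DESC LIMIT 1 ) SELECT row_to_json(t) FROM t;\n" |
    psql --tuples-only --no-align "hostaddr='127.0.0.1'dbname='weatherhistory'user='weatherquery'"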
The /weather/cgi/history.cgi script fetches rows between two timestamps passed to it (in JSON timestamp format) and answers with a JSON containing an array of all the data in the requested time window:
#! /bin/sh

COND=$( echo "${QUERY_STRING}" | tr '&' '\n' | sed -e 's/%22/\"/g' -e 's/%3A/:/g' | awk '
    /from=/ { v=$0; sub(/^[^"]*\"/, "", v); sub(/\".*$/, "", v); arg_from=v; }
    /to=/   { v=$0; sub(/^[^"]*\"/, "", v); sub(/\".*$/, "", v); arg_to=v; }
    END {
        if (arg_from && arg_to) {
            printf("timestamp >= '"'"'%s'"'"' AND timestamp <= '"'"'%s'"'"'\n", arg_from, arg_to);
        }
    }
' )

if [ -z "${COND}" ]; then
    # printf "could not parse: ${QUERY_STRING}\n" >> /tmp/sql.log
    exit 0;
fi

# start output
printf "\n\n"
# printf "${COND}\n" >> /tmp/sql.log

# fill database results
printf "WITH t AS ( SELECT * FROM weather WHERE ${COND} ORDER by timestamp ASC ) SELECT json_agg(t) FROM t;\n" |
    psql --tuples-only --no-align "hostaddr='127.0.0.1'dbname='weatherhistory'user='weatherquery'"   # 2&>> /tmp/sql.err
Fetching this data now is easy in JavaScript.
We have a request URL defined as a const, like this:
const queryURL = 'http://weatherhost.duskware.de:89/cgi-bin/history.cgi?';
and then add (if needed) the parameters for the query, like in this example function that gets passed a from-date and a to-date:
function showData(fromD, toD) {
    var url = new URL(queryURL);
    url.searchParams.append("from", '"'+fromD.toJSON()+'"');
    url.searchParams.append("to", '"'+toD.toJSON()+'"');
    fetch(url).then(function(response) {
        return response.json();
    }).then(function(data) {
        makeGraphs(data);
        updateButtons();
    }).catch(function(error) {
        console.error(error)
    });
}
When the answer from the server arrives, it is decoded as JSON and passed as input to the next function, which makes some graphs from the data array. Finally a few buttons are updated (in this example the time window is put into a start and an end date control).
Inspired by the post mentioned above I used canvas gauges for the display of the latest data and dygraphs for the display of historic data.
Here is an example of how the latest display looks:

And here is how the history display looks:

I have put an archive of the cgi scripts and web pages here, and also, for the curious who just want to peek at the full glory of my web design skills, the start page (showing the latest weather data) and the history page.
Besides those files, you will need:

- a /weather/files/favicon.ico, if you like.
- gauge.min.js, downloaded from canvas gauges and put into /weather/files/.
- dygraph.css and dygraph.min.js, downloaded from dygraph, plus synchronizer.js from the dygraph extras/ directory, also put into /weather/files/.
Then you should be ready to go - easy, isn't it? And no heavyweight dependencies or packages needed.
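To check that everything is wired up, you can fetch the latest-data JSON once by hand, for example with ftp(1) from base (host and port are the ones from my setup, adjust to yours):

# fetch the JSON that latest.cgi produces, via the query instance on port 89
ftp -V -o - "http://weatherhost.duskware.de:89/cgi-bin/latest.cgi"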
What about other weather stations?
There are quite a few similar weather stations out there now that seem to run "related" firmware and have similar capabilities. Most likely the update script (and details in the presentation pages) will need adjustments for other types.
If you start with a different device, just log all the data it sends and adjust the cgi scripts/database/JavaScript accordingly. For protocol analysis there are several easy means:

- Remove the "-q" flag in the httpd command (in /etc/inetd.conf) and check /var/log/xferlog for the query parameters sent by the weather station (when using the weather underground protocol).
- Make the station log to a debug.cgi first to capture everything (including form data posted). This works for the ecowitt protocol.
- All these stations seem to use http only (not https), so you can sniff the traffic. Use tcpdump -w on the server to capture the data and analyze it with net/wireshark from pkgsrc (see the sketch at the end of this post).
Here is what a debug.cgi script could look like:
#! /bin/sh
env > /tmp/debug.env
printf "\n\nOK\n"
cat > /tmp/debug.input &
This allows you to see the form input in /tmp/debug.input and the CGI environment in /tmp/debug.env.
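And for the sniffing option, a minimal capture could look like this (the interface name and the station's address are examples, adjust them to your network):

# capture the station's http traffic to the collection port for later analysis in wireshark
tcpdump -i re0 -w /tmp/weather.pcap host 192.168.1.50 and port 88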