Introduction

This quick tutorial explains how to start creating new test programs and test cases, describes how these tests are tied into the NetBSD source tree, and finishes with a short reference of the most commonly used functions.

You should start by reading the tests(7) manual page, which provides a user-level overview on how to run the tests included in NetBSD. While reading this tutorial, you may also want to refer to these pages on a need-to-know basis: atf-run(1), atf-report(1), atf-test-program(1), atf-c-api(3), atf-sh-api(3) and atf-check(1).

IMPORTANT: Do not take anything for granted, ESPECIALLY if you have previously worked with and/or have seen src/regress/. Your assumptions are most likely incorrect.

Test programs vs. test cases

So, what is what and how do you organize your tests?

A test case is a piece of code that exercises a particular functionality of another piece of code. Commonly, test cases validate the outcome of a particular source function or class method, the validity of the execution of a command with a particular combination of flags/arguments, etc. Test cases are supposed to be very concise, in the sense that they should just be testing one behavior.

A test program is a binary that collects and exposes a group of test cases. Typically, these test programs expose conceptually-related tests or all the tests for a particular source file.

In general, having many test programs with just one test case each is wrong and smells of the previous layout of src/regress/. Think about some other organization. And don't blame atf for this separation: it is extremely common in (almost?) all other test frameworks and, when used wisely, becomes an invaluable classification.

For example, suppose you have the following fictitious source files for the ls tool:

  • bin/ls/fs.c: Provides the list_files() and stat_files() functions.

  • bin/ls/ui.c: Provides the format_columns() function.

  • bin/ls/main.c: The main method for ls.

Then, you could define the following test programs and test cases:

  • bin/ls/fs_test.c: Provides test cases for list_files and stat_files. These would be named list_files__empty_directory, list_files__one_file, list_files__multiple_files, stat_files__directory, stat_files__symlink, etc.

  • bin/ls/ui_test.c: Provides test cases for the format_columns function. These would be named format_columns__no_files, format_columns__multiple_files, etc.

  • bin/ls/integration_test.sh: Provides "black box" test cases for the binary itself. These would be named lflag, lflag_and_Fflag, no_flags, no_files, etc.

Try to keep your test case names as descriptive as possible so that they do not require comments to explain what they intend to test.

Test case parts

The head

The head serves the sole purpose of defining meta-data properties for the test case. (Eventually this information will not be specified programmatically, but this is how we deal with it right now.)

The following properties are commonly useful (a short example follows the list):

  • descr: A textual description of the purpose of the test case.

  • require.user: Set to 'root' to tell the atf runtime that this test requires root privileges. The test will later be skipped if you are running atf as non-root, and the test will be executed otherwise. Please do not use this unless absolutely necessary! You can most likely make your tests run as a regular user if you use puffs and rump.

  • use.fs: Set to 'true' if the test case creates temporary files in the "current directory". If this property is left as 'false' (the default), the atf runtime will set the "current directory" to an unwritable location, which will disallow the creation of temporary files and will make your test mysteriously fail.
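For example, a C test case head that sets all three of these properties could look like the following sketch (the test case name, description and property values are invented for illustration):

ATF_TC_HEAD(mount_tmpfs, tc)
{
    atf_tc_set_md_var(tc, "descr", "Mounts a tmpfs and lists its root");
    atf_tc_set_md_var(tc, "require.user", "root"); /* Needs a real mount(2). */
    atf_tc_set_md_var(tc, "use.fs", "true");       /* Writes to the work directory. */
}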

The body

The body is the actual meat of the test case. This is just a regular function that executes any code you want and calls special atf functions to report failures; see below.

Be aware that the atf run-time isolates the execution of every test case to prevent side effects (such as temporary file leftovers, in-memory data corruption, etc.). In particular:

  • A test case is always executed as a subprocess that is separate from the head. This implies that you cannot pass any in-memory state between the parts.

  • The current working directory of a test case is changed to a temporary location that gets cleaned up later on automatically. (Set the use.fs property to true in the head if you need to write to this temporary directory.)

  • The environment of the test case is "sanitized" to get rid of variables that can cause side-effects; e.g. LC_ALL, TZ, etc.

Running the test programs

Do:

$ cd /usr/tests/
$ atf-run | atf-report

Why?

Test programs get installed into the /usr/tests/ hierarchy. The main reason for doing so is to allow any user to test the system and convince themselves that everything is working correctly.

Imagine that you install NetBSD-current on a public-facing machine that has some particular hardware only supported in the bleeding-edge source tree. In this scenario, you, as the administrator, could just go into /usr/tests/, run the tests and know immediately if everything is working correctly in your software+hardware combination or not. No need to rely on promises from the vendor, no need to deal with a source tree, no need to have a compiler installed...

So, that's the theory. Now, how does this map to our source tree?

At the moment, the source test programs are located somewhere under src/tests/. Say, for example, that you have the src/tests/bin/ls/ui_test.c source file. The Makefile in src/tests/bin/ls/ will take this source file and generate a ui_test binary. The Makefile will also generate an Atffile. Both files (the ui_test binary and the Atffile) will later be installed to /usr/tests/bin/ls/.

Executing a single test

In general, you do not want to run a test program by hand. If you do so, you do not take advantage of any of the isolation provided by the atf runtime. This means that the test program will probably leave some temporary files behind or will raise some false negatives.

To run a test, use atf-run. In general:

$ atf-run | atf-report  # To run all the test programs in a directory.
$ atf-run some_test | atf-report  # To run only the some_test program.

The only "legitimate" case in which you should be running test cases by hand is to debug them:

$ gdb --args ./some_test the_broken_test_case

... but be sure to clean up any leftover files if you do that.

Executing tests during development

When you are in a subdirectory of src/tests/, you can generally run "make test" to execute the tests of that particular subdirectory. This assumes that the tests have been installed into the destdir.
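For example, assuming the (fictitious) ls tests from earlier have already been built and installed into the destdir, you could do:

$ cd src/tests/bin/ls
$ make test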

Please note that this is only provided for convenience but it is completely unsupported. Tests run this way may fail mysteriously, and that is perfectly fine as long as they work from their canonical locations in /usr/tests.

Adding a new test

To add a new test case to the source tree, look for any test program in src/tests/ that can assimilate it. If you find such a program, just add the test case to it: no other changes are required so your life is easy. Otherwise, you will have to create a new test program.

To add a new test program to the source tree:

  1. Locate the appropriate subdirectory in which to put your test program. It is OK (and expected) to have multiple test programs in the same directory. Restrain yourself from creating one directory per test program.

If the subdirectory exists:

  1. Choose a sane name for the test program; the name must not be so specific that it restricts the addition of future test cases into it.

  2. Create the test program source file using one of the templates below. E.g. src/tests/tutorial/sample_test.c.

  3. Add the new test program to the Makefile.

If the subdirectory does not exist:

  1. Do the same as above.

  2. Create the Makefile for the directory using the templates below.

  3. Edit the parent Makefile to recurse into the new subdirectory.

  4. Edit src/etc/mtree/NetBSD.dist.tests to register the new subdirectory. Your test will be installed under /usr/tests/.

  5. Edit src/distrib/sets/lists/tests/mi to register the new test program. Do not forget to add .debug entries if your test program is a C/C++ binary.

Makefile template

Follow this template to create your Makefile:

.include <bsd.own.mk>

# This must always be defined.
TESTSDIR= ${TESTSBASE}/bin/ls

# These correspond to the test programs you have in the directory.
TESTS_C+= c1_test c2_test  # Correspond to c1_test.c and c2_test.c.
TESTS_SH+= sh1_test sh2_test  # Correspond to sh1_test.sh and sh2_test.sh.

# Define only if your tests need any data files.
FILESDIR= ${TESTSDIR}
FILES= testdata1.txt testdata2.bin  # Any necessary data files.

.include <bsd.test.mk>

Atffile template

Atffiles are automatically generated by bsd.test.mk, so in general you will not have to deal with them.

What is an Atffile? An Atffile is the atf-run counterpart of a "Makefile". Given that atf tests do not rely on a toolchain, they cannot use make(1) to script their execution as the old tests in src/regress/ did.

The Atffiles, in general, just provide a list of test programs in a particular directory and the list of the subdirectories to descend into.

If you have to provide an Atffile explicitly because the automatic generation does not suit your needs, follow this format:

Content-Type: application/X-atf-atffile; version="1"

prop: test-suite = NetBSD

tp: first_test
tp: second_test
tp-glob: optional_*_test
tp: subdir1
tp: subdir2

C test programs

Template

The following code snippet provides a C test program with two test cases. The specific details as to how this works follow later:

#include <atf-c.h>

ATF_TC(my_test_case);
ATF_TC_HEAD(my_test_case, tc)
{
    atf_tc_set_md_var(tc, "descr", "This test case ensures that...");
}
ATF_TC_BODY(my_test_case, tc)
{
    ATF_CHECK(true); /* Success; continue execution. */
    ATF_CHECK(false); /* Failure; continue execution. */

    ATF_CHECK_EQ(5, 2 + 2); /* Failure; continue execution. */
    ATF_REQUIRE_EQ(5, 2 + 2); /* Failure; abort execution. */

    if (!condition)
        atf_tc_fail("Condition not met!"); /* Abort execution. */
}

ATF_TC(another_test_case);
ATF_TC_HEAD(another_test_case, tc)
{
    atf_tc_set_md_var(tc, "descr", "This test case ensures that...");
}
ATF_TC_BODY(another_test_case, tc)
{
    /* Do more tests here... */
}

ATF_TP_ADD_TCS(tp)
{
    ATF_TP_ADD_TC(tp, my_test_case);
    ATF_TP_ADD_TC(tp, another_test_case);
}

This program needs to be built with the Makefile shown below. Once built, the program automatically gains a main() function that provides a consistent user interface to all test programs. You are simply not intended to provide your own main function, nor to deal with the command line of the invocation yourself.
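For instance, once the fictitious fs_test program from earlier has been built, the generated entry point already understands the standard interface described in atf-test-program(1); the following invocations list its test cases and run a single one by hand (remember that direct execution is for debugging only, as explained above):

$ ./fs_test -l  # List the test cases in the program.
$ ./fs_test list_files__empty_directory  # For DEBUGGING only.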

How to build

To build a C test program, append the name of the test program (without the .c extension) to the TESTS_C variable in the Makefile.

For example:

.include <bsd.own.mk>

TESTSDIR= ${TESTSBASE}/bin/ls

TESTS_C+= fs_test ui_test

.include <bsd.test.mk>

Common functions

The following functions are commonly used from within a test case body; a short example combining a few of them follows the list:

  • ATF_REQUIRE(boolean_expression): Checks if the given boolean expression is true and, if not, aborts execution and marks the test as failed. Similarly ATF_CHECK performs the same test but does not abort execution: it records the failure but keeps processing the test case. For an explanation on when to use which, refer to the FAQ question below.

  • ATF_REQUIRE_EQ(expected_expression, actual_expression): Checks if the two expressions match and, if not, aborts execution, marking the test as failed. Similarly, ATF_CHECK_EQ records the error but does not abort execution.

  • ATF_REQUIRE_STREQ(expected_string, actual_string): Same as ATF_REQUIRE_EQ but performs string comparisons with strcmp.

  • atf_tc_skip(const char *format, ...): Marks the test case as skipped with the provided reason and exits.

  • atf_tc_fail(const char *format, ...): Marks the test case as failed with the provided reason and exits.

  • atf_tc_pass(void): Explicitly marks the test case as passed. This is implied when the test case function ends, so you should not use this in general.

  • atf_tc_expect_fail(const char *format, ...): Tells the atf runtime that the code following this call is expected to raise one or more failures (be it with atf_tc_fail, ATF_REQUIRE_*, etc.). Use this to mark a block of code that is known to be broken (e.g. a test that reproduces a known bug). Use the format parameter to explain why the code is broken; if possible, provide a PR number. Lastly, to terminate the "expected failure" code block and reset the runtime to the default behavior, use the atf_tc_expect_pass() function.

  • atf_tc_expect_death(const char *format, ...): Same as atf_tc_expect_fail but expects an abrupt termination of the test case, be it due to a call to exit() or to the reception of a signal.

  • atf_tc_expect_exit(int exitcode, const char *format, ...): Same as atf_tc_expect_fail but expects the test case to exit with a specific exitcode. Provide -1 to indicate any exit code.

  • atf_tc_expect_signal(int signo, const char *format, ...): Same as atf_tc_expect_fail but expects the test case to receive a specific signal. Provide -1 to indicate any signal.

  • atf_tc_expect_timeout(const char *format, ...): Same as atf_tc_expect_fail but expects the test case to get stuck and time out.

  • atf_tc_get_config_var("srcdir"): Returns the path to the directory containing the test program binary. This must be used to locate any data/auxiliary files stored alongside the binary.

  • RL(integer_expression, integer): Used to wrap a call to a libc function that updates errno on error and to report that error correctly. The integer expression is the call to the function, and the literal integer is the return value that signals an error. For example: RL(open("foo", O_RDONLY), -1). This would fail the test case if open returns -1, and would record the error message corresponding to the errno set by libc.
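As a small illustration of a few of these functions, the following sketch skips the test when a device is missing, builds the path to a data file installed alongside the binary, and requires that the file be readable before going on. The device node and file name are hypothetical, chosen only for this example:

#include <atf-c.h>
#include <stdio.h>
#include <unistd.h>

ATF_TC(read_datafile);
ATF_TC_HEAD(read_datafile, tc)
{
    atf_tc_set_md_var(tc, "descr", "Reads a data file shipped with the test");
}
ATF_TC_BODY(read_datafile, tc)
{
    char path[1024];

    /* Skip, rather than fail, when the environment lacks a prerequisite. */
    if (access("/dev/fictitious", R_OK) == -1)
        atf_tc_skip("/dev/fictitious is not available on this machine");

    /* Data files are installed next to the test binary; build their path
     * from the srcdir configuration variable. */
    snprintf(path, sizeof(path), "%s/testdata1.txt",
        atf_tc_get_config_var(tc, "srcdir"));
    ATF_REQUIRE(access(path, R_OK) == 0);
}

As with any other test case, this one would still need to be registered in ATF_TP_ADD_TCS, as in the template above.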

Shell test programs

Template

The following code snippet provides a shell test program with two test cases. The details on how this works are provided later:

atf_test_case my_test_case
my_test_case_head() {
    atf_set "descr" "This test case ensures that..."
}
my_test_case_body() {
    touch file1 file2

    cat >expout <<EOF
file1
file2
EOF
    # The following call validates that the 'ls' command returns an
    # exit code of 0, that its stdout matches exactly the contents
    # previously stored in the 'expout' file and that its stderr is
    # completely empty.  See atf-check(1) for details, which is the
    # auxiliary tool invoked by the atf_check wrapper function.
    atf_check -s eq:0 -o file:expout -e empty 'ls'

    atf_check_equal 4 $((2 + 2))

    if [ ! -f file1 ]; then
        atf_fail "Condition not met!"  # Explicit failure.
    fi
}

atf_test_case another_test_case
another_test_case_head() {
    atf_set "descr" "This test case ensures that..."
}
another_test_case_body() {
    # Do more tests...
}

atf_init_test_cases() {
    atf_add_test_case my_test_case
    atf_add_test_case another_test_case
}

This program needs to be built with the Makefile shown below. The program automatically gains an entry point that provides a consistent user interface to all test programs. You are simply not intended to provide your own entry point, nor to deal with the command line of the invocation yourself.

How to build

To build a shell test program, append the name of the test program (without the .sh extension) to the TESTS_SH variable in the Makefile.

For example:

.include <bsd.own.mk>

TESTSDIR= ${TESTSBASE}/bin/ls

TESTS_SH+= integration_test something_else_test

.include <bsd.test.mk>

If you want to run the test program yourself, you should know that shell-based test programs are processed with the atf-sh interpreter. atf-sh is just a thin wrapper over /bin/sh that loads the shared atf code and then delegates execution to your source file.

Common functions

The following functions are commonly used from within a test case body:

  • atf_check: This is probably the most useful function for shell-based tests. It takes some practice to get right, but it lets you check, in a single line, the execution of a command: its exit code, its stdout and its stderr. It is just a wrapper over atf-check, so please refer to atf-check(1) for more details.

  • atf_check_equal value1 value2: Check that the two values are equal and, if not, abort execution.

  • atf_expect_*: Same as their C counterparts; see above.

  • atf_fail reason: Explicitly marks the test case as failed and aborts it.

  • atf_skip reason: Explicitly marks the test case as skipped and exits.

  • atf_pass: Explicitly marks the test case as passed and exits.

  • atf_get_srcdir: Prints the path to the directory where the test case lives. Use as $(atf_get_srcdir)/my-static-data-file.

FAQ

How do I atfify a plain test program?

Let's suppose you have a program to exercise a particular piece of code. Conceptually this implements a test but it does not use atf at all. For example:

#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* This test program exercises the snprintf function. */

int main(void)
{
    char buf[1024];

    printf("Testing integers");
    snprintf(buf, sizeof(buf), "%d", 3);
    if (strcmp(buf, "3") != 0)
        errx(EXIT_FAILURE, "%d failed");
    snprintf(buf, sizeof(buf), "a %d b", 5);
    if (strcmp(buf, "a 5 b") != 0)
        errx(EXIT_FAILURE, "%d failed");

    printf("Testing strings");
    snprintf(buf, sizeof(buf), "%s", "foo");
    if (strcmp(buf, "foo") != 0)
        errx(EXIT_FAILURE, "%s failed");
    snprintf(buf, sizeof(buf), "a %s b", "bar");
    if (strcmp(buf, "a bar b") != 0)
        errx(EXIT_FAILURE, "%s failed");

    return EXIT_SUCCESS;
}

To convert this program into an atf test program, use the template above and keep this in mind:

  • Split the whole main function into separate test cases. In this scenario, the calls to printf(3) delimit a good granularity for the test cases: one for the integer formatter, one for the string formatter, etc.

  • Use the ATF_CHECK* and/or atf_tc_fail functions to do the comparisons and report errors. Neither errx nor any other error reporting and program termination functions (read: err, errx, warn, warnx, exit, abort) are to be used at all.

The result would look like:

#include <atf-c.h>
#include <stdio.h>
#include <string.h>

ATF_TC(integer_formatter);
ATF_TC_HEAD(integer_formatter, tc)
{
    atf_tc_set_md_var(tc, "descr", "Validates the %d formatter");
}
ATF_TC_BODY(integer_formatter, tc)
{
    char buf[1024];

    snprintf(buf, sizeof(1024), "%d", 3);
    ATF_CHECK_STREQ("3", buf);

    snprintf(buf, sizeof(1024), "a %d b", 5);
    ATF_CHECK_STREQ("a 5 b", buf);
}

ATF_TC(string_formatter);
ATF_TC_HEAD(string_formatter, tc)
{
    atf_tc_set_md_var(tc, "descr", "Validates the %s formatter");
}
ATF_TC_BODY(string_formatter, tc)
{
    char buf[1024];

    snprintf(buf, sizeof(1024), "%s", "foo");
    ATF_CHECK_STREQ("foo", buf);

    snprintf(buf, sizeof(1024), "a %s b", "bar");
    ATF_CHECK_STREQ("a bar b", buf);
}

ATF_TP_ADD_TCS(tp)
{
    ATF_TP_ADD_TC(tp, integer_formatter);
    ATF_TP_ADD_TC(tp, string_formatter);
}

Which can later be invoked as any of:

$ atf-run snprintf_test | atf-report  # Normal execution method.
$ ./snprintf_test integer_formatter  # For DEBUGGING only.
$ ./snprintf_test string_formatter  # For DEBUGGING only.

How do I write a test case for an unfixed PR?

Use the "expectations" mechanism to define part of the test case as faulty, crashy, etc. This is for two reasons:

  • As long as the bug still exists, the test case will be reported as an "expected failure". Such expected failures do not count towards the success or failure of the whole test suite.

  • When the bug gets fixed, the bug will not trigger any more in the test case, and thus the expectation of failure will not be met any more. At this point the test case will start raising a regular failure, which is usually addressed by just removing the expect_* calls (but add a comment with the PR number!).

For example, suppose we have PR lib/1 that reports a condition in which snprintf() does the wrong formatting when using %s, and PR lib/2 that mentions that another snprintf() call using %d with number 5 causes a segfault. We could do:

#include <atf-c.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

ATF_TC(integer_formatter);
ATF_TC_HEAD(integer_formatter, tc)
{
    atf_tc_set_md_var(tc, "descr", "Tests the %d formatter for snprintf");
}
ATF_TC_BODY(integer_formatter, tc)
{
    char buf[1024];

    snprintf(buf, sizeof(buf), "Hello %d\n", 1);
    ATF_CHECK_STREQ("Hello 1", buf);

    atf_tc_expect_signal(SIGSEGV, "PR lib/2: %%d with 5 causes a crash");
    snprintf(buf, sizeof(buf), "Hello %d\n", 5);
    atf_tc_expect_pass();
    ATF_CHECK_STREQ("Hello 5", buf);
}

ATF_TC(string_formatter);
ATF_TC_HEAD(string_formatter, tc)
{
    atf_tc_set_md_var(tc, "descr", "Tests the %s formatter for snprintf");
}
ATF_TC_BODY(string_formatter, tc)
{
    char buf[1024];

    snprintf(buf, sizeof(buf), "Hello %s\n", "world!");
    atf_tc_expect_failure("PR lib/1: %%s does not work");
    ATF_CHECK_STREQ("Hello world!", buf);
    atf_tc_expect_pass();
}

ATF_TP_ADD_TCS(tp)
{
    ATF_TP_ADD_TC(tp, integer_formatter);
    ATF_TP_ADD_TC(tp, string_formatter);
}

Do I need to remove temporary files?

No. atf-run does this automatically for you, because it runs every test case in its own temporary subdirectory.

When do I use ATF_CHECK and when ATF_REQUIRE?

ATF_CHECK logs errors but does not abort the execution of the test case. ATF_REQUIRE logs errors in a similar way but immediately terminates the execution of the test case.

You can use this distinction in the following way: use ATF_REQUIRE to check the code that "prepares" your test case. Use ATF_CHECK to do the actual functionality tests once all the set up has been performed. For example:

ATF_TC_BODY(getline, tc)
{
    FILE *f;
    char *line = NULL;
    size_t len = 0;

    /* Opening the file is not part of the functionality under test, but it
     * must succeed before we actually test the relevant code. */
    ATF_REQUIRE((f = fopen("foo", "r")) != NULL);

    ATF_CHECK(getline(&line, &len, f) > 0);
    ATF_CHECK_STREQ("line 1\n", line);

    ATF_CHECK(getline(&line, &len, f) > 0);
    ATF_CHECK_STREQ("line 2\n", line);
}