File: [NetBSD Developer Wiki] / wikisrc / tutorials / atf.mdwn
Revision 1.5, Fri Sep 3 15:04:05 2010 UTC, by jmmv

[[!meta title="Creating atf-based tests for NetBSD src"]]
[[!toc ]]


This quick tutorial is an attempt to work around the lack of proper documentation
in atf.  It provides guidelines on how to start creating new test programs
and/or test cases, explains how these tests are tied to the NetBSD source tree,
and gives a short reference of the most commonly used functions.

You should start by reading the tests(7) manual page, which is probably the only
sane document in the whole documentation set.  Any other attempts at reading the
atf-* manual pages are probably doomed unless you are already familiar with atf
itself and its internals.  Still, you may be able to get some useful information
out of them.

**IMPORTANT: Do not take anything for granted, ESPECIALLY if you have previously
worked with and/or have seen src/regress/.  Your assumptions are most likely
wrong.**

# Test programs vs. test cases

So, what is what and how do you organize your tests?

A **test case** is a piece of code that exercises a particular functionality of
another piece of code.  Commonly, test cases validate the outcome of a
particular source function or class method, the validity of the execution of a
command with a particular combination of flags/arguments, etc.  Test cases are
supposed to be very concise, in the sense that they should just be testing *one
particular behavior*.

A **test program** is a binary that collects and exposes a group of test cases.
Typically, these test programs expose conceptually-related tests or all the
tests for a particular source file.

In general, having many test programs with **just one test case** in them is
**wrong** and smells of the previous layout of src/regress/.  Think about some
other organization.  And don't blame atf for this separation: it is extremely
common in (almost?) all other test frameworks and, when used wisely, becomes an
invaluable classification.

For example, suppose you have the following fictitious source files for the
ls(1) utility:

* bin/ls/fs.c: Provides the list_files() and stat_files() functions.

* bin/ls/ui.c: Provides the format_columns() function.

* bin/ls/main.c: The main method for ls.

Then, you could define the following test programs and test cases:

* bin/ls/fs_test.c: Provides test cases for list_files and stat_files.  These
  would be named list_files__empty_directory, list_files__one_file,
  list_files__multiple_files, stat_files__directory, stat_files__symlink, etc.

* bin/ls/ui_test.c: Provides test cases for the format_columns function.  These
  would be named format_columns__no_files, format_columns__multiple_files, etc.

* A shell-based test program in bin/ls/: Provides "black box" test cases for the
  binary itself.  These would be named lflag, lflag_and_Fflag, no_flags,
  no_files, etc.

Try to keep your test case names as descriptive as possible so that they do not
require comments to explain what they intend to test.

# Test case parts

A test case is composed of three parts: the *head*, the *body* and the
*cleanup*.  Only the body is required; the other two routines are optional.

## The head

The *head* serves **the sole purpose** of defining meta-data properties for the
test case.  (Eventually this meta-data may not be specified programmatically,
but this is how we deal with the information right now.)

The following properties are commonly useful:

* descr: A textual description of the purpose of the test case.

* require.user: Set to 'root' to mark the test case as root-specific.  It is
  nice not to abuse this; see puffs and rump.

* use.fs: Set to 'true' if the test case creates temporary files in the "current
  directory".  Otherwise the atf runtime will isolate the test case in such a
  way as to forbid this, which will mysteriously make your test fail.

## The body

The *body* is the actual meat of the test case.  This is just a regular function
that executes any code you want and calls special atf functions to report
failures; see below.

Be aware that the atf run-time **isolates** the execution of every test case to
prevent side-effects (such as temporary file leftovers, in-memory data
corruption, etc.).  In particular:

* A test case is **always executed as a subprocess** that is separate from the
  head and the cleanup.

* The current working directory of a test case is changed to a temporary
  location that gets cleaned up automatically later on.  (Set the use.fs
  property to true in the head if you need to write to this temporary
  directory.)
* The environment of the test case is "sanitized" to get rid of variables that
  can cause side-effects; e.g. LC_ALL, TZ, etc.

# Installation of test programs: the why and the where

Test programs get installed into the /usr/tests/ hierarchy.  The main reason for
doing that is to allow *any* user to test their system and to convince
themselves that everything is working correctly.

Imagine that you install NetBSD-current on a public-facing machine that has some
particular hardware only supported in the bleeding-edge source tree.  In this
scenario, you, as the administrator, could just go into /usr/tests/, run the
tests and know immediately if everything is working correctly in your
software+hardware combination or not.  No need to rely on promises from the
vendor, no need to deal with a source tree, no need to have a compiler
installed.
So, that's the theory.  Now, how does this map to our source tree?

At the moment, the test program sources are located somewhere under src/tests/.
Say, for example, that you have the src/tests/bin/ls/ui_test.c source file.
The Makefile in src/tests/bin/ls/ will take this source file and generate a
ui_test binary.  The Makefile will also generate an Atffile.  Both files (the
ui_test binary and the Atffile) will later be installed to /usr/tests/bin/ls/.

# Adding a new test

To add a new *test case* to the source tree, look for any test program in
src/tests/ that can assimilate it.  If you find such a program, just add the
test case to it: no other changes are required so your life is easy.  Otherwise,
you will have to create a new test program.

To add a new *test program* to the source tree:

1. Locate the appropriate subdirectory in which to put your test program.  It is
OK (and **expected**) to have multiple test programs in the same directory.
**Restrain yourself from creating one directory per test program.**

If the subdirectory exists:

1. Choose a sane name for the test program; the name must not be so specific
   that it restricts the addition of future test cases into it.

1. Create the test program source file using one of the templates below.
   E.g. src/tests/tutorial/sample_test.c.

1. Add the new test program to the Makefile.

If the subdirectory does not exist:

1. Do the same as above.

1. Create the Makefile for the directory using the templates below.

1. Edit the parent Makefile to recurse into the new subdirectory.

1. Edit src/etc/mtree/NetBSD.base.dist to register the new subdirectory.  Your
   test will be installed under /usr/tests/.

1. Edit src/distrib/sets/lists/tests/mi to register the new test program.  Do
   not forget to add .debug entries if your test program is a C/C++ binary.

## Makefile template

    # $NetBSD$

    .include <bsd.own.mk>

    # This must always be defined.
    TESTSDIR= ${TESTSBASE}/tutorial

    # Define only the variables you actually need for the directory.
    TESTS_C+= c1_test c2_test  # Correspond to c1_test.c and c2_test.c.
    TESTS_SH+= sh1_test sh2_test  # Correspond to sh1_test.sh and sh2_test.sh.

    # Define only if your tests need any data files.
    FILES= testdata1.txt testdata2.bin  # Any necessary data files.

    .include <bsd.test.mk>

## Atffile template

What is an Atffile?  An Atffile is the atf-run counterpart of a "Makefile".
Given that atf tests *do not rely on a toolchain*, they cannot use make(1) to
script their execution as the old tests in src/regress/ did.

The Atffiles, in general, just provide a list of test programs in a particular
directory and the list of the subdirectories to descend into.

Atffiles are automatically generated by the build machinery, so in general you
will not have to deal with them.  However, if you have to provide one
explicitly, it follows this format:

    Content-Type: application/X-atf-atffile; version="1"

    prop: test-suite = NetBSD

    tp: first_test
    tp: second_test
    tp-glob: optional_*_test
    tp: subdir1
    tp: subdir2
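
As a concrete (hypothetical) example, the Atffile for the fictitious bin/ls
tests described earlier could look like this:

```
Content-Type: application/X-atf-atffile; version="1"

prop: test-suite = NetBSD

tp: fs_test
tp: ui_test
tp: integration_test
```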

# C test programs

## Template

The following code snippet provides a C test program with two test cases:

    #include <atf-c.h>

    ATF_TC(my_test_case);
    ATF_TC_HEAD(my_test_case, tc)
    {
        atf_tc_set_md_var(tc, "descr", "This test case ensures that...");
    }
    ATF_TC_BODY(my_test_case, tc)
    {
        ATF_CHECK(returns_a_boolean()); /* Non-fatal test. */
        ATF_REQUIRE(returns_a_boolean()); /* Fatal test. */

        ATF_CHECK_EQ(4, 2 + 2); /* Non-fatal test. */
        ATF_REQUIRE_EQ(4, 2 + 2); /* Fatal test. */

        if (!condition)
            atf_tc_fail("Condition not met!"); /* Explicit failure. */
    }

    ATF_TC_WITHOUT_HEAD(another_test_case);
    ATF_TC_BODY(another_test_case, tc)
    {
        /* Do more tests here... */
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, my_test_case);
        ATF_TP_ADD_TC(tp, another_test_case);

        return atf_no_error();
    }

This program needs to be linked against libatf-c as described below.  Once
linked, the program automatically gains a main() method that provides a
consistent user interface to all test programs.  You are simply not intended to
provide your own main method, nor to deal with the command line of the
resulting program yourself.

## How to build

To build a C test program, append the name of the test program (without the .c
extension) to the TESTS_C variable in the Makefile.

For example:

    .include <bsd.own.mk>

    TESTSDIR= ${TESTSBASE}/bin/ls

    TESTS_C+= fs_test ui_test

    .include <bsd.test.mk>

## Common functions

The following functions are commonly used from within a test case body:

* ATF_CHECK(boolean_expression): Checks if the given boolean expression is true
  and, if not, records a failure for the test case but *execution continues*.
  Using ATF_REQUIRE aborts execution immediately after a failure.

* ATF_CHECK_EQ(expected_expression, actual_expression): Checks if the two
  expressions match and, if not, records a failure.  Similarly, ATF_REQUIRE_EQ
  aborts immediately if the check fails.

* ATF_CHECK_STREQ(expected_string, actual_string): Same as ATF_CHECK_EQ but
  performs string comparisons with strcmp.

* atf_tc_skip(const char *format, ...): Marks the test case as skipped with the
  provided reason and exits.

* atf_tc_fail(const char *format, ...): Marks the test case as failed with the
  provided reason and exits.

* atf_tc_pass(void): Explicitly marks the test case as passed.  This is
  *implied* when the test case function ends, so you should rarely need to
  call it yourself.

* atf_tc_expect_fail(const char *format, ...): Tells the atf runtime that the
  code following this call is expected to raise one or more failures (be it
  with atf_tc_fail, ATF_CHECK_*, etc.).  Use this to mark a block of code that
  is known to be broken (e.g. a test that reproduces a known bug).  Use the
  string parameter to explain why the code is broken; if possible, provide a
  PR number.  Lastly, to terminate the "expected failure" code block and reset
  the runtime to the default behavior, call atf_tc_expect_pass().

* atf_tc_expect_death(const char *format, ...): Same as atf_tc_expect_fail but
  expects an abrupt termination of the test case, be it due to a call to
  exit() or to the reception of a signal.

* atf_tc_expect_exit(int exitcode, const char *format, ...): Same as
  atf_tc_expect_fail but expects the test case to exit with a specific exit
  code.  Provide -1 to indicate any exit code.

* atf_tc_expect_signal(int signo, const char *format, ...): Same as
  atf_tc_expect_fail but expects the test case to receive a specific signal.
  Provide -1 to indicate any signal.

* atf_tc_expect_timeout(const char *format, ...): Same as atf_tc_expect_fail
  but expects the test case to get stuck and time out.

# Shell test programs

## Template

The following code snippet provides a shell test program with two test cases:

    atf_test_case my_test_case
    my_test_case_head() {
        atf_set "descr" "This test case ensures that..."
    }
    my_test_case_body() {
        touch file1 file2

        cat >expout <<EOF
    file1
    file2
    EOF
        atf_check -s eq:0 -o file:expout -e empty 'ls'

        atf_check_equal 4 $((2 + 2))

        if [ 'a' != 'b' ]; then
            atf_fail "Condition not met!"  # Explicit failure.
        fi
    }

    atf_test_case another_test_case
    another_test_case_body() {
        # Do more tests...
    }

    atf_init_test_cases() {
        atf_add_test_case my_test_case
        atf_add_test_case another_test_case
    }

This program needs to be executed with the atf-sh(1) interpreter as described
below.  The program automatically gains an entry point that provides a
consistent user interface to all test programs.  You are simply not intended to
provide your own "main method", nor to deal with the command line of the
resulting program yourself.

## How to build

To build a shell test program, append the name of the test program (without the
.sh extension) to the TESTS_SH variable in the Makefile.

For example:

    .include <bsd.own.mk>

    TESTSDIR= ${TESTSBASE}/bin/ls

    TESTS_SH+= integration_test something_else_test

    .include <bsd.test.mk>

If you want to run the test program yourself, you should know that shell-based
test programs are processed with the atf-sh interpreter.  atf-sh is just a thin
wrapper over /bin/sh that loads the shared atf code and then delegates execution
to your source file.

## Common functions

The following functions are commonly used from within a test case body:

* atf_check: This is probably the most useful function for shell-based tests.
  It may need some experience to get right, but it allows you, in one line, to
  check the execution of a command.  Here, check means: validate the exit code,
  stdout and stderr.  This is just a wrapper over atf-check, so please refer to
  atf-check(1) for more details.

* atf_check_equal value1 value2: Check that the two values are equal and, if
  not, abort execution.

* atf_expect_*: Same as their C counterparts; see above.

* atf_fail reason: Explicitly marks the test case as failed and aborts it.

* atf_skip reason: Explicitly marks the test case as skipped and exits.

* atf_pass: Explicitly marks the test case as passed and exits.

* atf_get_srcdir: Prints the path to the directory where the test case lives.
  Use as $(atf_get_srcdir)/my-static-data-file.


## How do I atfify a plain test program?

Let's suppose you have a program that exercises a particular piece of code.
Conceptually this implements a test but it does not use atf at all.  For
example:

    #include <err.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* This test program exercises the snprintf function. */

    int main(void)
    {
        char buf[1024];

        printf("Testing integers\n");
        snprintf(buf, sizeof(buf), "%d", 3);
        if (strcmp(buf, "3") != 0)
            errx(EXIT_FAILURE, "%%d failed");
        snprintf(buf, sizeof(buf), "a %d b", 5);
        if (strcmp(buf, "a 5 b") != 0)
            errx(EXIT_FAILURE, "%%d failed");

        printf("Testing strings\n");
        snprintf(buf, sizeof(buf), "%s", "foo");
        if (strcmp(buf, "foo") != 0)
            errx(EXIT_FAILURE, "%%s failed");
        snprintf(buf, sizeof(buf), "a %s b", "bar");
        if (strcmp(buf, "a bar b") != 0)
            errx(EXIT_FAILURE, "%%s failed");

        return EXIT_SUCCESS;
    }

To convert this program into an atf test program, use the template above and
keep this in mind:

* Split the whole main function into separate test cases.  In this scenario, the
  calls to printf(3) delimit a good granularity for the test cases: one for the
  integer formatter, one for the string formatter, etc.

* Use the ATF_CHECK* and/or atf_tc_fail functions to do the comparisons and
  report errors.  errx should not be used.

The result would look like:

    #include <atf-c.h>
    #include <stdio.h>

    ATF_TC(integer_formatter);
    ATF_TC_HEAD(integer_formatter, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Validates the %d formatter");
    }
    ATF_TC_BODY(integer_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "%d", 3);
        ATF_CHECK_STREQ("3", buf);

        snprintf(buf, sizeof(buf), "a %d b", 5);
        ATF_CHECK_STREQ("a 5 b", buf);
    }

    ATF_TC(string_formatter);
    ATF_TC_HEAD(string_formatter, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Validates the %s formatter");
    }
    ATF_TC_BODY(string_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "%s", "foo");
        ATF_CHECK_STREQ("foo", buf);

        snprintf(buf, sizeof(buf), "a %s b", "bar");
        ATF_CHECK_STREQ("a bar b", buf);
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, integer_formatter);
        ATF_TP_ADD_TC(tp, string_formatter);

        return atf_no_error();
    }

Which can later be invoked as any of:

    $ ./snprintf_test integer_formatter
    $ ./snprintf_test string_formatter
    $ atf-run snprintf_test | atf-report

## How do I write a test case for an unfixed PR?

Use the "expectations" mechanism to define part of the test case as faulty,
crashy, etc.  For example, suppose we have PR 1 that reports a condition in
which snprintf() does the wrong formatting when using %s, and PR 2 that mentions
that another snprintf() call using %d with the number 5 causes a segfault.  We
could write the following:
    #include <atf-c.h>
    #include <signal.h>
    #include <stdio.h>

    ATF_TC_WITHOUT_HEAD(integer_formatter);
    ATF_TC_BODY(integer_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "Hello %d", 1);
        ATF_CHECK_STREQ("Hello 1", buf);

        atf_tc_expect_signal(SIGSEGV, "PR 2: %%d with 5 causes a crash");
        snprintf(buf, sizeof(buf), "Hello %d", 5);
        ATF_CHECK_STREQ("Hello 5", buf);
    }

    ATF_TC_WITHOUT_HEAD(string_formatter);
    ATF_TC_BODY(string_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "Hello %s", "world!");
        atf_tc_expect_fail("PR 1: %%s does not work");
        ATF_CHECK_STREQ("Hello world!", buf);
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, integer_formatter);
        ATF_TP_ADD_TC(tp, string_formatter);

        return atf_no_error();
    }

## Do I need to remove temporary files?

No.  atf-run does this automatically for you, because it runs every test program
in its own temporary subdirectory.
