[[!meta title="Creating atf-based tests for NetBSD src"]]

**Contents**

[[!toc ]]

# Introduction

This quick tutorial provides guidelines on how to start creating new test
programs and/or test cases, explains how these tests are tied to the NetBSD
source tree, and includes a short reference of the most commonly used functions.

You should start by reading the
[[!template id=man name="tests" section="7"]] manual
page, which provides a user-level overview of how to run the tests included in
NetBSD.  While reading this tutorial, you may also want to refer to these pages
on a need-to-know basis:
[[!template id=man name="atf-run" section="1"]],
[[!template id=man name="atf-report" section="1"]],
[[!template id=man name="atf-test-program" section="1"]],
[[!template id=man name="atf-c-api" section="3"]],
[[!template id=man name="atf-sh-api" section="3"]]
and
[[!template id=man name="atf-check" section="1"]].

**IMPORTANT: Do not take anything for granted, ESPECIALLY if you have previously
worked with and/or have seen src/regress/.  Your assumptions are most likely
incorrect.**

# Test programs vs. test cases

So, what is what and how do you organize your tests?

A **test case** is a piece of code that exercises a particular functionality of
another piece of code.  Commonly, test cases validate the outcome of a
particular source function or class method, the validity of the execution of a
command with a particular combination of flags/arguments, etc.  Test cases are
supposed to be very concise, in the sense that they should just be testing *one
behavior*.

A **test program** is a binary that collects and exposes a group of test cases.
Typically, these test programs expose conceptually-related tests or all the
tests for a particular source file.

In general, having many test programs with **just one test case** in them is
**wrong** and smells of the previous layout of src/regress/.  Think about some
other organization.  And don't blame atf for this separation: it is extremely
common in (almost?) all other test frameworks and, when used wisely, becomes an
invaluable classification.

For example, suppose you have the following fictitious source files for the ls
tool:

* bin/ls/fs.c: Provides the list_files() and stat_files() functions.

* bin/ls/ui.c: Provides the format_columns() function.

* bin/ls/main.c: The main method for ls.

Then, you could define the following test programs and test cases:

* bin/ls/fs_test.c: Provides test cases for list_files and stat_files.  These
  would be named list_files\_\_empty_directory, list_files\_\_one_file,
  list_files\_\_multiple_files, stat_files\_\_directory, stat_files\_\_symlink, etc.

* bin/ls/ui_test.c: Provides test cases for the format_columns function.  These
  would be named format_columns\_\_no_files, format_columns\_\_multiple_files, etc.

* bin/ls/integration_test.sh: Provides "black box" test cases for the binary
  itself.  These would be named lflag, lflag_and_Fflag, no_flags, no_files, etc.

Try to keep your test case names as descriptive as possible so that they do not
require comments to explain what they intend to test.

# Test case parts

## The head

The *head* is used **for the sole purpose** of defining meta-data properties for
the test case.  (Eventually, these properties will not be specified
programmatically, but this is how we deal with the information right now.)  A
short example follows the list of properties below.

The following properties are commonly useful:

* descr: A textual description of the purpose of the test case.

* require.user: Set to 'root' to tell the atf runtime that this test requires
  root privileges.  The test will later be skipped if you are running atf as
  non-root, and the test will be executed otherwise.  Please do not use this
  unless absolutely necessary!  You can most likely make your tests run as a
  regular user if you use puffs and rump.

* use.fs: Set to 'true' if the test case creates temporary files in the "current
  directory".  If set to false, the atf runtime will set the "current directory"
  to an unwritable directory, which will disallow the creation of the temporary
  files and will make your test mysteriously fail.

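For illustration, here is a minimal sketch of a head that sets these properties
from a C test program.  The test case name is hypothetical, and the macros used
here are introduced in the C template later in this document:

    ATF_TC(write_tempfile);
    ATF_TC_HEAD(write_tempfile, tc)
    {
        /* Purely illustrative values; adjust them to your test case. */
        atf_tc_set_md_var(tc, "descr", "Checks that temporary files can "
            "be created in the work directory");
        atf_tc_set_md_var(tc, "require.user", "root");
        atf_tc_set_md_var(tc, "use.fs", "true");
    }
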
## The body

The *body* is the actual meat of the test case.  This is just a regular function
that executes any code you want and calls special atf functions to report
failures; see below.

Be aware that the atf run-time **isolates** the execution of every test case to
prevent side-effects (such as temporary file leftovers, in-memory data
corruption, etc.).  In particular:

* A test case is **always executed as a subprocess** that is separate from the
  head.  This implies that you cannot pass any in-memory state between the
  parts.

* The current working directory of a test case is changed to a temporary
  location that gets cleaned up later on automatically; see the sketch after
  this list.  (Set the use.fs property to true in the head if you need to write
  to this temporary directory.)

* The environment of the test case is "sanitized" to get rid of variables that
  can cause side-effects; e.g. LC_ALL, TZ, etc.

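As a minimal sketch of how a body can rely on this isolation (it assumes the
use.fs property is set to true in the head, as in the hypothetical
write_tempfile example above), the following fragment creates a scratch file in
the per-test work directory and never removes it, because the atf runtime
deletes the whole directory when the test case ends:

    ATF_TC_BODY(write_tempfile, tc)
    {
        FILE *f;

        /* "scratch.txt" is created inside the isolated work directory. */
        ATF_REQUIRE((f = fopen("scratch.txt", "w")) != NULL);
        ATF_REQUIRE(fputs("hello\n", f) != EOF);
        fclose(f);

        /* No cleanup needed: the runtime removes the work directory. */
    }
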
# Running the test programs

Do:

    $ cd /usr/tests/
    $ atf-run | atf-report

Why?

Test programs get installed into the /usr/tests/ hierarchy.  The main reason for
doing that is to allow *any* user to test their system and to convince
themselves that everything is working correctly.

Imagine that you install NetBSD-current on a public-facing machine that has some
particular hardware only supported in the bleeding-edge source tree.  In this
scenario, you, as the administrator, could just go into /usr/tests/, run the
tests and know immediately if everything is working correctly in your
software+hardware combination or not.  No need to rely on promises from the
vendor, no need to deal with a source tree, no need to have a compiler
installed...

So, that's the theory.  Now, how does this map to our source tree?

At the moment, the source test programs are located somewhere under src/tests/.
Say, for example, that you have the src/tests/bin/ls/ui_test.c source file.
The Makefile in src/tests/bin/ls/ will take this source file and generate a
ui_test binary.  The Makefile will also generate an Atffile.  Both files (the
ui_test binary and the Atffile) will later be installed to /usr/tests/bin/ls/.

## Executing a single test

In general, you **do not want to run a test program by hand**.  If you do so,
you do not take advantage of any of the isolation provided by the atf runtime.
This means that the test program will probably leave some temporary files behind
or will raise some false negatives.

To run a test, use atf-run.  In general:

    $ atf-run | atf-report  # To run all the test programs in a directory.
    $ atf-run some_test | atf-report  # To run only the some_test program.

The only "legitimate" case in which you should be running test cases by hand is
to debug them:

    $ gdb --args ./some_test the_broken_test_case

... but be sure to clean up any leftover files if you do that.

## Executing tests during development

When you are in a subdirectory of src/tests/, you can generally run "make test"
to execute the tests of that particular subdirectory.  This assumes that the
tests have been installed into the destdir.

Please note that this is provided only for convenience and is completely
unsupported.  Tests run this way may fail mysteriously, and that is perfectly
fine as long as they work from their canonical locations in /usr/tests.

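For example, continuing the fictitious bin/ls example and assuming the sources
live in /usr/src and the tests have already been installed:

    $ cd /usr/src/tests/bin/ls
    $ make test
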
# Adding a new test

To add a new *test case* to the source tree, look for any test program in
src/tests/ that can assimilate it.  If you find such a program, just add the
test case to it: no other changes are required so your life is easy.  Otherwise,
you will have to create a new test program.

To add a new *test program* to the source tree:

1. Locate the appropriate subdirectory in which to put your test program.  It is
OK (and **expected**) to have multiple test programs in the same directory.
**Restrain yourself from creating one directory per test program.**

If the subdirectory exists:

1. Choose a sane name for the test program; the name must not be so specific
   that it restricts the addition of future test cases to it.

1. Create the test program source file using one of the templates below.
   E.g. src/tests/tutorial/sample_test.c.

1. Add the new test program to the Makefile.

If the subdirectory does not exist:

1. Do the same as above.

1. Create the Makefile for the directory using the templates below.

1. Edit the parent Makefile to recurse into the new subdirectory.

1. Edit src/etc/mtree/NetBSD.dist.tests to register the new subdirectory.  Your
   test will be installed under /usr/tests/.

1. Edit src/distrib/sets/lists/tests/mi to register the new test program.  Do
   not forget to add .debug entries if your test program is a C/C++ binary.

## Makefile template

Follow this template to create your Makefile:

    .include <bsd.own.mk>

    # This must always be defined.
    TESTSDIR= ${TESTSBASE}/bin/ls

    # These correspond to the test programs you have in the directory.
    TESTS_C+= c1_test c2_test  # Correspond to c1_test.c and c2_test.c.
    TESTS_SH+= sh1_test sh2_test  # Correspond to sh1_test.sh and sh2_test.sh.

    # Define only if your tests need any data files.
    FILESDIR= ${TESTSDIR}
    FILES= testdata1.txt testdata2.bin  # Any necessary data files.

    .include <bsd.test.mk>

## Atffile template

*Atffiles are automatically generated by bsd.test.mk, so in general you will not
have to deal with them.*

What is an Atffile?  An Atffile is the atf-run counterpart of a "Makefile".
Given that atf tests *do not rely on a toolchain*, they cannot use make(1) to
script their execution as the old tests in src/regress/ did.

The Atffiles, in general, just provide a list of test programs in a particular
directory and the list of the subdirectories to descend into.

If you have to provide an Atffile explicitly because the automatic generation
does not suit your needs, follow this format:

    Content-Type: application/X-atf-atffile; version="1"

    prop: test-suite = NetBSD

    tp: first_test
    tp: second_test
    tp-glob: optional_*_test
    tp: subdir1
    tp: subdir2

# C test programs

## Template

The following code snippet provides a C test program with two test cases.  The
specific details of how this works follow later:

    #include <atf-c.h>

    ATF_TC(my_test_case);
    ATF_TC_HEAD(my_test_case, tc)
    {
        atf_tc_set_md_var(tc, "descr", "This test case ensures that...");
    }
    ATF_TC_BODY(my_test_case, tc)
    {
        ATF_CHECK(true); /* Success; continue execution. */
        ATF_CHECK(false); /* Failure; continue execution. */

        ATF_CHECK_EQ(5, 2 + 2); /* Failure; continue execution. */
        ATF_REQUIRE_EQ(5, 2 + 2); /* Failure; abort execution. */

        if (!condition) /* 'condition' stands for any expression you compute. */
            atf_tc_fail("Condition not met!"); /* Abort execution. */
    }

    ATF_TC(another_test_case);
    ATF_TC_HEAD(another_test_case, tc)
    {
        atf_tc_set_md_var(tc, "descr", "This test case ensures that...");
    }
    ATF_TC_BODY(another_test_case, tc)
    {
        /* Do more tests here... */
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, my_test_case);
        ATF_TP_ADD_TC(tp, another_test_case);

        return atf_no_error();
    }

This program needs to be built with the Makefile shown below.  Once built, the
program automatically gains a main() function that provides a consistent user
interface to all test programs.  You are simply not intended to provide your own
main function, nor to parse the command line of the invocation yourself.

## How to build

To build a C test program, append the name of the test program (without the .c
extension) to the TESTS_C variable in the Makefile.

For example:

    .include <bsd.own.mk>

    TESTSDIR= ${TESTSBASE}/bin/ls

    TESTS_C+= fs_test ui_test

    .include <bsd.test.mk>

## Common functions

The following functions are commonly used from within a test case body:

* ATF_REQUIRE(boolean_expression): Checks if the given boolean expression is
  true and, if not, aborts execution and marks the test as failed.  Similarly,
  ATF_CHECK performs the same test but does not abort execution: it records the
  failure but keeps processing the test case.  For an explanation of when to use
  which, refer to the FAQ below and to the short example that follows this list.

* ATF_REQUIRE_EQ(expected_expression, actual_expression): Checks if the two
  expressions match and, if not, aborts marking the test as failed.  Similarly,
  ATF_CHECK_EQ records the error but does not abort execution.

* ATF_REQUIRE_STREQ(expected_string, actual_string): Same as ATF_REQUIRE_EQ but
  performs string comparisons with strcmp.  ATF_CHECK_STREQ is the non-aborting
  counterpart.

* atf_tc_skip(const char *format, ...): Marks the test case as skipped with the
  provided reason and exits.

* atf_tc_fail(const char *format, ...): Marks the test case as failed with the
  provided reason and exits.

* atf_tc_pass(void): Explicitly marks the test case as passed.  This is
  *implied* when the test case function ends, so you should not use this in
  general.

* atf_tc_expect_fail(const char *format, ...): Tells the atf runtime that the
  code following this call is expected to raise one or more failures (be it with
  atf_tc_fail, ATF_REQUIRE_*, etc.).  Use this to mark a block of code that is
  known to be broken (e.g. a test that reproduces a known bug).  Use the string
  parameter to provide an explanation about why the code is broken; if possible,
  provide a PR number.  Lastly, to terminate the "expected failure" code block
  and reset the runtime to the default functionality, use the
  atf_tc_expect_pass() function.

* atf_tc_expect_death(const char *format, ...): Same as atf_tc_expect_fail but
  expects an abrupt termination of the test case, be it due to a call to exit()
  or to the reception of a signal.

* atf_tc_expect_exit(int exitcode, const char *format, ...): Same as
  atf_tc_expect_fail but expects the test case to exit with a specific exitcode.
  Provide -1 to indicate any exit code.

* atf_tc_expect_signal(int signo, const char *format, ...): Same as
  atf_tc_expect_fail but expects the test case to receive a specific signal.
  Provide -1 to indicate any signal.

* atf_tc_expect_timeout(const char *format, ...): Same as atf_tc_expect_fail but
  expects the test case to get stuck and time out.

* atf_tc_get_config_var(tc, "srcdir"): Returns the path to the directory
  containing the test program binary.  This must be used to locate any
  data/auxiliary files stored alongside the binary.

* RL(integer_expression, integer): Used to evaluate a call to a libc function
  that updates errno when it returns an error and to provide correct error
  reporting.  The integer expression is the call to such a function, and the
  literal integer provides the expected return value when there is an error.
  For example: RL(open("foo", O_RDONLY), -1).  This would fail the test case if
  open returns -1, and would record the correct error message returned by libc.

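The following fragment is an illustrative sketch that combines a few of the
calls above.  The sample.txt data file is hypothetical (it would be installed
via the FILES variable shown in the Makefile template), and the fragment assumes
that <stdio.h> and <unistd.h> are included along with <atf-c.h>:

    ATF_TC_BODY(reads_data_file, tc)
    {
        char path[1024], buf[16];
        FILE *f;

        /* Build the path to a data file installed alongside the binary. */
        snprintf(path, sizeof(path), "%s/sample.txt",
            atf_tc_get_config_var(tc, "srcdir"));

        /* Setup steps: abort the whole test case if the file is missing. */
        ATF_REQUIRE(access(path, R_OK) == 0);
        ATF_REQUIRE((f = fopen(path, "r")) != NULL);

        /* Functional check: record a failure but keep executing. */
        ATF_CHECK(fgets(buf, sizeof(buf), f) != NULL);

        fclose(f);
    }
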
# Shell test programs

## Template

The following code snippet provides a shell test program with two test cases.
The details on how this works are provided later:

    atf_test_case my_test_case
    my_test_case_head() {
        atf_set "descr" "This test case ensures that..."
    }
    my_test_case_body() {
        touch file1 file2

        cat >expout <<EOF
    file1
    file2
    EOF
        # The following call validates that the 'ls' invocation returns an
        # exit code of 0, that its stdout matches exactly the contents
        # previously stored in the 'expout' file and that its stderr is
        # completely empty.  See atf-check(1) for details, which is the
        # auxiliary tool invoked by the atf_check wrapper function.  (The
        # file names are listed explicitly so that 'expout' itself does
        # not show up in the output being checked.)
        atf_check -s eq:0 -o file:expout -e empty ls file1 file2

        atf_check_equal 4 $((2 + 2))

        if [ ! -f file1 ]; then
            atf_fail "Condition not met!"  # Explicit failure.
        fi
    }

    atf_test_case another_test_case
    another_test_case_head() {
        atf_set "descr" "This test case ensures that..."
    }
    another_test_case_body() {
        # Do more tests...
    }

    atf_init_test_cases() {
        atf_add_test_case my_test_case
        atf_add_test_case another_test_case
    }

This program needs to be built with the Makefile shown below.  The program
automatically gains an entry point that provides a consistent user interface to
all test programs.  You are simply not intended to provide your own "main
method", nor to parse the command line of the invocation yourself.

## How to build

To build a shell test program, append the name of the test program (without the
.sh extension) to the TESTS_SH variable in the Makefile.

For example:

    .include <bsd.own.mk>

    TESTSDIR= ${TESTSBASE}/bin/ls

    TESTS_SH+= integration_test something_else_test

    .include <bsd.test.mk>

If you want to run the test program yourself, you should know that shell-based
test programs are processed with the atf-sh interpreter.  atf-sh is just a thin
wrapper over /bin/sh that loads the shared atf code and then delegates execution
to your source file.

## Common functions

The following functions are commonly used from within a test case body:

* atf_check: This is probably the most useful function for shell-based tests.
  It may take some experience to get it right, but it allows you to check, in
  one line, the execution of a command: its exit code, its stdout and its
  stderr.  This is just a wrapper over atf-check, so please refer to
  [[!template id=man name="atf-check" section="1"]]
  for more details.

* atf_check_equal value1 value2: Checks that the two values are equal and, if
  not, aborts execution.

* atf_expect_*: Same as the atf_tc_expect_* C functions described above.

* atf_fail reason: Explicitly marks the test case as failed and aborts it.

* atf_skip reason: Explicitly marks the test case as skipped and exits.

* atf_pass: Explicitly marks the test case as passed and exits.

* atf_get_srcdir: Prints the path to the directory where the test case lives.
  Use as $(atf_get_srcdir)/my-static-data-file.

# FAQ

## How do I atfify a plain test program?

Let's suppose you have a program to exercise a particular piece of code.
Conceptually this implements a test but it does not use atf at all.  For
example:

    #include <err.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* This test program exercises the snprintf function. */

    int main(void)
    {
        char buf[1024];

        printf("Testing integers");
        snprintf(buf, sizeof(buf), "%d", 3);
        if (strcmp(buf, "3") != 0)
            errx(EXIT_FAILURE, "%%d failed");
        snprintf(buf, sizeof(buf), "a %d b", 5);
        if (strcmp(buf, "a 5 b") != 0)
            errx(EXIT_FAILURE, "%%d failed");

        printf("Testing strings");
        snprintf(buf, sizeof(buf), "%s", "foo");
        if (strcmp(buf, "foo") != 0)
            errx(EXIT_FAILURE, "%%s failed");
        snprintf(buf, sizeof(buf), "a %s b", "bar");
        if (strcmp(buf, "a bar b") != 0)
            errx(EXIT_FAILURE, "%%s failed");

        return EXIT_SUCCESS;
    }

To convert this program into an atf test program, use the template above and
keep this in mind:

* Split the whole main function into separate test cases.  In this scenario, the
  calls to printf(3) delimit a good granularity for the test cases: one for the
  integer formatter, one for the string formatter, etc.

* Use the ATF_CHECK* and/or atf_tc_fail functions to do the comparisons and
  report errors.  Neither errx nor any other error reporting and program
  termination functions (read: err, errx, warn, warnx, exit, abort) are to be
  used at all.

The result would look like:

    #include <atf-c.h>
    #include <stdio.h>

    ATF_TC(integer_formatter);
    ATF_TC_HEAD(integer_formatter, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Validates the %d formatter");
    }
    ATF_TC_BODY(integer_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "%d", 3);
        ATF_CHECK_STREQ("3", buf);

        snprintf(buf, sizeof(buf), "a %d b", 5);
        ATF_CHECK_STREQ("a 5 b", buf);
    }

    ATF_TC(string_formatter);
    ATF_TC_HEAD(string_formatter, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Validates the %s formatter");
    }
    ATF_TC_BODY(string_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "%s", "foo");
        ATF_CHECK_STREQ("foo", buf);

        snprintf(buf, sizeof(buf), "a %s b", "bar");
        ATF_CHECK_STREQ("a bar b", buf);
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, integer_formatter);
        ATF_TP_ADD_TC(tp, string_formatter);

        return atf_no_error();
    }

Which can later be invoked as any of:

    $ atf-run snprintf_test | atf-report  # Normal execution method.
    $ ./snprintf_test integer_formatter  # For DEBUGGING only.
    $ ./snprintf_test string_formatter  # For DEBUGGING only.

## How do I write a test case for an unfixed PR?

Use the "expectations" mechanism to define part of the test case as faulty,
crashy, etc.  This is for two reasons:

* As long as the bug still exists, the test case will be reported as an
  "expected failure".  Such expected failures do not count towards the success
  or failure of the whole test suite.

* When the bug gets fixed, it will no longer trigger in the test case, and thus
  the expectation of failure will no longer be met.  At this point the test case
  will start raising a regular failure, which is usually addressed by just
  removing the expect_* calls (but add a comment with the PR number!).

For example, suppose we have PR lib/1 that reports a condition in which
snprintf() does the wrong formatting when using %s, and PR lib/2 that mentions
that another snprintf() call using %d with number 5 causes a segfault.  We could
do:

    #include <atf-c.h>
    #include <signal.h>
    #include <stdio.h>

    ATF_TC(integer_formatter);
    ATF_TC_HEAD(integer_formatter, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Tests the %d formatter for snprintf");
    }
    ATF_TC_BODY(integer_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "Hello %d", 1);
        ATF_CHECK_STREQ("Hello 1", buf);

        atf_tc_expect_signal(SIGSEGV, "PR lib/2: %%d with 5 causes a crash");
        snprintf(buf, sizeof(buf), "Hello %d", 5);
        atf_tc_expect_pass();
        ATF_CHECK_STREQ("Hello 5", buf);
    }

    ATF_TC(string_formatter);
    ATF_TC_HEAD(string_formatter, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Tests the %s formatter for snprintf");
    }
    ATF_TC_BODY(string_formatter, tc)
    {
        char buf[1024];

        snprintf(buf, sizeof(buf), "Hello %s", "world!");
        atf_tc_expect_fail("PR lib/1: %%s does not work");
        ATF_CHECK_STREQ("Hello world!", buf);
        atf_tc_expect_pass();
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, integer_formatter);
        ATF_TP_ADD_TC(tp, string_formatter);

        return atf_no_error();
    }

## Do I need to remove temporary files?

No.  atf-run does this automatically for you, because it runs every test case
in its own temporary subdirectory.

## When do I use ATF_CHECK and when ATF_REQUIRE?

ATF_CHECK logs errors but does not abort the execution of the test program.
ATF_REQUIRE logs errors in a similar way but immediately terminates the
execution.

You can use this distinction in the following way: use ATF_REQUIRE to check the
code that "prepares" your test case.  Use ATF_CHECK to do the actual
functionality tests once all the setup has been performed.  For example:

    ATF_TC_BODY(getline, tc) {
        FILE *f;
        char buf[1024];

        /* Opening the file is not part of the functionality under test, but it
         * must succeed before we actually test the relevant code. */
        ATF_REQUIRE((f = fopen("foo", "r")) != NULL);

        /* 'getline' here stands for the (hypothetical) function under test. */
        ATF_CHECK(getline(f, buf, sizeof(buf)) > 0);
        ATF_CHECK_STREQ("line 1", buf);

        ATF_CHECK(getline(f, buf, sizeof(buf)) > 0);
        ATF_CHECK_STREQ("line 2", buf);
    }
