Test Point Sample

Introduction

These examples demonstrate a simple technique for monitoring and testing activity that occurs in another thread. They show a common testing scenario of this type: verifying the behavior of a state machine.

If you are not familiar with test points, you may find it helpful to review the Test Point article before proceeding.

Source under test

s2_testpoint_source.c / h

These files implement a simple state machine that we wish to test. The state machine runs in its own thread, and starts when the thread function StateControllerTask is executed.

The expected state transitions are as follows:

eSTART -> eIDLE -> eACTIVE -> eIDLE -> eEND

The states don't do any work; instead they just sleep() so there's some time spent in each one.

Each state transition is managed through a call to SetNewState(), which communicates the transition to the test thread using the srTEST_POINT() macro. We set the macro argument to the name of the state we are transitioning to, as this is the 'value' of the test point that the test thread will receive.
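
The sketch below illustrates this pattern. It is not the actual sample source: the includes, sleep durations, and argument form of srTEST_POINT() are assumptions based on the description above; only the StateControllerTask/SetNewState() structure follows the sample.

  #include <unistd.h>    /* sleep(); assumed POSIX environment */
  #include "srtest.h"    /* assumed STRIDE header providing srTEST_POINT */

  /* Report each transition to the test thread; the test point 'value'
     the test thread receives is the name of the new state. */
  static void SetNewState(const char* szNewState)
  {
      srTEST_POINT(szNewState);
  }

  void StateControllerTask(void)
  {
      SetNewState("eSTART");
      SetNewState("eIDLE");
      sleep(1);                /* illustrative time spent in the state */
      SetNewState("eACTIVE");
      sleep(1);
      SetNewState("eIDLE");
      sleep(1);
      SetNewState("eEND");
  }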

Test Descriptions

s2_testpoint_basic

This example implements three tests of the state machine implemented in s2_testpoint_source. These tests demonstrate the use of srTestPointWait() to verify activity in another thread and srTestPointCheck() to verify an already completed activity.

Each test follows the same pattern in preparing and using the test point feature:

  1. specify the set of test points of interest - create an array of type srTestPointExpect_t that specifies the expected test points and, optionally, an array of type srTestPointUnexpect_t for unexpected ones
  2. set the expectation array using srTestPointSetup()
  3. start the state machine
  4. use the srTestPointCheck() or srTestPointWait() macro to validate the expected test points

We create an "expectation" of activity and then validate the observed activity against the expectation using rules that we specify. If the expectation is met, the test passes; if the expectation is not met, the test fails.

The main difference between the tests is in the parameter values provided to each test's validation API, as the sketches below illustrate.
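
A minimal sketch of the shared pattern follows. The initializer layout of srTestPointExpect_t, the parameter lists of srTestPointSetup() and srTestPointCheck(), and the helper RunStateControllerToCompletion() are assumptions for illustration; the sample source and the STRIDE headers contain the real declarations.

  #include "srtest.h"   /* assumed STRIDE test header */

  void StateMachineTest(void)
  {
      /* 1. the test points of interest (a zeroed entry is assumed to
            terminate the list) */
      srTestPointExpect_t expected[] = {
          { "eSTART" }, { "eIDLE" }, { "eACTIVE" }, { "eIDLE" }, { "eEND" },
          { 0 }
      };

      /* 2. register the expectation before the activity begins; the
            placement of the ordering flag in this call is an assumption */
      srTestPointSetup(expected, 0 /* no unexpected list */,
                       srTEST_POINT_EXPECT_ORDERED);

      /* 3. run the state machine; for srTestPointCheck the activity
            should already have completed (hypothetical helper) */
      RunStateControllerToCompletion();

      /* 4. validate the expectation (timeout parameter assumed) */
      srTestPointCheck(1000);
  }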

SyncExact

Here we verify an exact match between the contents of the expected array and the observed testpoints. The combination of srTEST_POINT_EXPECT_ORDERED and an unexpected list specifies that the test will pass only if:

  • only the testpoints in the expected array are seen, and
  • the testpoints are seen in the order specified
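
In sketch form (the srTestPointUnexpect_t initializer is an assumption; expected[] lists the five transitions in order, as in the pattern sketch above):

  /* A sentinel-only unexpected list is assumed here to mean "any test
     point not named in expected[] fails the test". */
  srTestPointUnexpect_t unexpected[] = { { 0 } };

  srTestPointSetup(expected, unexpected, srTEST_POINT_EXPECT_ORDERED);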

SyncLooseTimed

Here we loosen the restrictions of the exact test. By specifying srTEST_POINT_EXPECT_UNORDERED and an empty unexpected list, we now will:

  • ignore any testpoints seen that aren't in the expected array, and
  • disregard the order in which the testpoints are received

Note that the "IDLE" testpoint is now included in the expected array only once, but with an expected count of 2.

srTestPointCheck() will now cause the test to fail only if the expected testpoints are not all seen (the specified number of times) within the timeout period.
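
A sketch of the loosened expectation (the field names .szLabel and .uCount are assumptions for illustration):

  srTestPointExpect_t expected[] = {
      { .szLabel = "eSTART" },
      { .szLabel = "eIDLE", .uCount = 2 },  /* listed once, expected twice */
      { .szLabel = "eACTIVE" },
      { .szLabel = "eEND" },
      { 0 }
  };

  /* Unordered matching with no unexpected list: extra or out-of-order
     test points are ignored. */
  srTestPointSetup(expected, 0, srTEST_POINT_EXPECT_UNORDERED);
  srTestPointCheck(1000);   /* timeout in ms; value illustrative */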

AsyncLooseTimed

This test is identical to the SyncLooseTimed test, except that we call srTestPointWait() and pass a timeout value of 400 milliseconds. This results in a test failure, as it takes approximately 600 milliseconds for the testpoint expectations to be satisfied.
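
In sketch form, only the validation call changes (parameter meaning assumed as above):

  /* Same expectation setup as SyncLooseTimed, but validated while the
     state machine is still running: 400 ms is less than the ~600 ms the
     machine needs, so the expectation cannot be met in time. */
  srTestPointWait(400);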

CheckData

This test is identical to SyncLooseTimed, except that we specify srTEST_POINT_EXPECT_ORDERED for an ordered expectation set and we specify expected data for some of our test points. This test will pass only if the test points are seen in the specified order and match both the label and data specified.
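
A sketch of an expectation that also pins the payload (the field names .szLabel and .szData, and the payload values shown, are assumptions; the actual sample's data is not reproduced here):

  srTestPointExpect_t expected[] = {
      { .szLabel = "eSTART" },
      { .szLabel = "eIDLE",   .szData = "idle-entry" },   /* label and data must match */
      { .szLabel = "eACTIVE", .szData = "active-entry" },
      { .szLabel = "eIDLE" },                             /* label-only check */
      { .szLabel = "eEND" },
      { 0 }
  };

  srTestPointSetup(expected, 0, srTEST_POINT_EXPECT_ORDERED);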

CheckBinaryData

This test demonstrates the validation of a test point with a binary payload using a predicate function. A structure is used as the test point payload, and a predicate is written to validate one or more fields in the resulting structure payload.
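
The sketch below shows the general shape of such a predicate; the payload struct, the predicate signature, and the way it attaches to an expectation entry are all assumptions.

  #include <stddef.h>   /* size_t */
  #include <string.h>   /* memcmp */

  /* Illustrative binary payload sent with the test point. */
  typedef struct {
      int  transitionCount;
      char lastState[8];
  } StatePayload_t;

  /* Predicate: return nonzero to accept the received payload
     (signature assumed). */
  static int ValidateStatePayload(const void* pData, size_t uSize)
  {
      const StatePayload_t* p = (const StatePayload_t*)pData;
      return uSize == sizeof(StatePayload_t)
          && p->transitionCount == 4
          && memcmp(p->lastState, "eEND", 5) == 0;
  }

  /* Attached to an expectation entry, e.g. (field name assumed):
     { .szLabel = "eEND", .pfnPredicate = ValidateStatePayload }  */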

Run Tests

Now launch the test app (if you have not already) and execute the runner with the following command:

Test Point tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testpoint_basic" --log_level=all --output=TestPoint.xml

Note that the command will produce a result file for the run (per the --output option above). Open the result file in your browser to peruse the results.

Observations

This sample demonstrates how to write tests that validate STRIDE Test Points with native test code. Although test point validation tests can be written in host-based scripting languages as well, sometimes it's preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description.

The following are some test observations:

  • the test point test cases show the test points that were encountered, information about failures (if any), and log messages (the latter only because we included the --log_level=all option when executing the runner).
  • test point tests can be packaged into a harness using any of the three types of test units that we support. In this case, we used an FList so that the sample can be used on systems that are not C++ capable.
  • one of two methods is used to process the test: srTestPointWait or srTestPointCheck. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).
  • due to the limitations of C syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the CheckData example, for instance).