Expectations Sample

Introduction

This example demonstrates a simple technique for monitoring and testing activity that occurs in instrumented source code on the device from test logic implemented on the host. The sample shows a common testing scenario: verifying the behavior of a state machine.

If you are not familiar with test points you may find it helpful to review the Test Point article before proceeding.

Source under test

s2_expectations_source.c / h

These files implement a simple state machine that we wish to test. The state machine runs when Exp_DoStateChanges is executed; this function has also been instrumented so that it can be invoked remotely from the host.

The expected state transitions are as follows:

eSTART -> eIDLE -> eACTIVE -> eIDLE -> eEND

The states don't do any work; instead they just sleep() so there's some time spent in each one.

Each state transition is managed through a call to SetNewState(), which communicates the transition to the test thread using the srTEST_POINT() macro. We also provide an incrementing counter value as data with each of these test points; this data is used for validation in one of our example scenarios. The source under test has also been instrumented with a test point that illustrates the use of JSON as a textual serialization format (which is easily decoded on the host).

Tests Description

s2_expectations_testmodule

This example implements a set of tests of the state machine implemented in s2_expectations_source. These tests demonstrate the use of the Perl Script APIs to validate expectations.

Each test follows the same pattern in preparing and using the test point feature:

  1. Call TestPointSetup with the order parameter as well as the expected and unexpected lists.
  2. Invoke the target processing by calling the remote function Exp_DoStateChanges.
  3. Use Check or Wait on the test point object to process the expectations.

We create an "expectation" of activity and then validate the observed activity against the expectation using rules that we specify. If the expectation is met, the test passes; if the expectation is not met, the test fails.
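
In a Perl test method, the pattern above might look roughly like the following sketch. The TestPointSetup, Check, and Exp_DoStateChanges names come from this sample, but the argument shapes, labels, and handler interface shown here are illustrative assumptions rather than the documented API.

  # Sketch only - the setup arguments and handler interface shown are assumptions.
  sub expectation_test_pattern {
      my @expected   = ( 'START', 'IDLE', 'ACTIVE', 'IDLE', 'END' );   # illustrative labels
      my @unexpected = ( );
      my $tp = TestPointSetup( \@expected, \@unexpected );   # 1. declare the expectation
      Exp_DoStateChanges();                                  # 2. drive the instrumented code remotely
      return $tp->Check();                                   # 3. validate the observed activity
  }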

sync_exact

This test implements a basic ordered and strict expectation test (ordered/strict processing is the default behavior for TestPointSetup, so there is no need to specify these settings explicitly). An ordered/strict test expects the declared events to occur in the order specified, with no other intervening events from among those already declared in the list.

The test strictness normally applies to the universe of declared test points - that is, the union of all points declared in the expected and unexpected lists. However, for this example, we want to ensure that no other test points are encountered during the test. As such, we specify an unexpected list containing TEST_POINT_EVERYTHING_ELSE, which makes this check entirely exclusive on the expected list.
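
The sync_exact setup might be sketched as follows; the labels and list shapes are illustrative assumptions, with TEST_POINT_EVERYTHING_ELSE used as described above.

  # Sketch only - labels and list shapes are illustrative assumptions.
  my $tp = TestPointSetup(
      [ 'START', 'IDLE', 'ACTIVE', 'IDLE', 'END' ],   # expected, in order (ordered/strict default)
      [ TEST_POINT_EVERYTHING_ELSE ],                 # any other test point fails the check
  );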

sync_loose

This test relaxes some of the restrictions of the previous test by specifying unordered processing and removing the unexpected list. As a result, this test simply validates that the declared test points occur with the specified frequency during the processing. It validates neither the ordering of the events nor the absence of events outside the expectation universe (that is, test points NOT mentioned in the expectation list).

Note that the IDLE testpoint is now included in the expected array only once, but with an expected count of 2. This technique is common when using unordered processing.

This test fails only if the expected test points are not all seen (the specified number of times) during the processing window.
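
An unordered expectation list with a count on the IDLE entry might be sketched like this; the entry hash keys and the option used to select unordered processing are assumptions.

  # Sketch only - entry hash keys and the unordered option are assumptions.
  my $tp = TestPointSetup(
      [
          { name => 'START'             },
          { name => 'IDLE',  count => 2 },   # expected twice during the run
          { name => 'ACTIVE'            },
          { name => 'END'               },
      ],
      undef,         # no unexpected list
      'unordered',   # relax the ordering requirement
  );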

async_loose

This test is identical to the sync_loose test, except that we call Wait() and pass a timeout value of 200 milliseconds. This results in a test failure, because it takes approximately 600 milliseconds for the test point expectations to be satisfied (the source under test includes artificial delays between each state transition).

This test is expected to fail.
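
The change from sync_loose might be sketched like this (the Wait() name comes from this sample; its exact signature is an assumption):

  # Sketch only - Wait() usage is assumed; $tp is the same unordered setup used by sync_loose.
  Exp_DoStateChanges();
  my $pass = $tp->Wait(200);   # the ~600 ms of state transitions exceed 200 ms, so this fails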

check_data

This test is identical to sync_loose, except that we use ordered processing (with an ordered expectation set) and we specify expected data for some of our test points. This test passes only if the test points are seen in the specified order and match both the specified label and data.

For the test points with binary data, we have to use the Perl pack() function to create a scalar value that has the proper bit pattern. Whenever you validate target data, you need to take byte ordering for basic types into account; here we assume the target has the same byte ordering as the host. A more scalable approach to validating data from the target is to use a string-based serialization format such as JSON.
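
For example, if the counter payload is a native 32-bit unsigned integer, the expected data might be built as sketched below; pack() is standard Perl, while the data field in the expectation entry is an assumption.

  # pack() builds a scalar with the raw bit pattern of a value ('L' = native 32-bit unsigned).
  # This assumes the counter is a 32-bit unsigned int and that target and host byte order match.
  my $expected_counter = pack( 'L', 2 );
  my @expected = (
      { name => 'IDLE', data => $expected_counter },   # entry shape is an assumption
  );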

trace_data

This test is similar to sync_exact, except that the expectations are loaded from an expected data file that was created using the --trace option of the STRIDE Runner. We've also removed the unexpected list, so the items in the trace file are not treated as an exclusive set.

By default, tests that use an expect data file perform validation based only on the test point label. If you want data comparison to be performed as well, you need to specify a global predicate via the predicate option.

trace_data_validation

This test is identical to trace_data, but with the addition of data validation using the provided TestPointDefaultCmp predicate.

trace_data_custom_predicate

This test is identical to trace_data except that a custom predicate is specified for the data validation. The custom predicate in this example just validates binary data using the standard memory comparison and implicitly passes for any non-binary payloads.
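
Such a predicate might be sketched as follows. The predicate arguments (expected payload, actual payload) and the way binary payloads are detected are assumptions; the byte-for-byte comparison itself is plain Perl.

  # Sketch only - the predicate arguments and the binary-detection test are assumptions.
  sub binary_only_cmp {
      my ( $expected, $actual ) = @_;
      # Implicitly pass anything that does not look like a binary payload.
      return 1 unless defined $expected && $expected =~ /[^[:print:]]/;
      # Standard memory comparison: the byte strings must match exactly.
      return ( $expected eq $actual ) ? 1 : 0;
  }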

json_data

This test demonstrates the use of JSON-formatted string data from the target in predicate validation. The string payload for the test point is decoded using the standard Perl JSON library, and the object's fields are validated in the predicate function.
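
A predicate along these lines might be sketched as shown below. decode_json() comes from the standard Perl JSON module; the predicate arguments and the payload's field names are illustrative assumptions.

  use JSON;   # provides decode_json()
  # Sketch only - the predicate arguments and the payload's field names are assumptions.
  sub json_predicate {
      my ( $expected, $actual ) = @_;
      my $obj = eval { decode_json( $actual ) };   # e.g. '{"state":"IDLE","count":2}'
      return 0 unless $obj;                        # fail on malformed JSON
      return ( $obj->{state} eq 'IDLE' && $obj->{count} == 2 ) ? 1 : 0;
  }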

non_strict_ordered

This test demonstrates the use of non-strict processing, which allows additional occurrences of test points within your current universe. In this test, there are other occurrences of the SET_NEW_STATE event that we have not explicitly specified in our expectation list. Had we been using strict processing (the default), these extra occurrences would have caused the test to fail. Because we have specified non-strict processing, the test passes.

unexpected

This test demonstrates the use of the unexpected list to ensure that specific test points are not hit during the check. In this case, we specify JSON_DATA in our unexpected list; since this test point does in fact occur during our processing, the test fails.

This test is expected to fail.
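
The setup for this test might be sketched like this; JSON_DATA is the label used in this sample, while the list shapes are assumptions.

  # Sketch only - list shapes are assumptions; JSON_DATA is the label used in this sample.
  my $tp = TestPointSetup(
      [ 'START', 'END' ],   # illustrative expected list
      [ 'JSON_DATA' ],      # the check fails if this test point is ever hit
  );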

startup_predicate

This test demonstrates the use of the special TEST_POINT_ANYTHING specification to implement startup logic. Sometimes it is desirable to suspend processing of your expectation list until a certain event occurs, and startup predicate logic lets you accomplish this. In this somewhat contrived example, we run our event scenario twice consecutively. The startup logic waits for the END event to occur; once it does, the predicate returns true and the remaining items are processed in turn.
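
An expectation list with a startup entry might be sketched as follows; the entry shape, predicate arguments, and the use of TEST_POINT_IGNORE as the "keep waiting" return value are assumptions based on the descriptions in this article.

  # Sketch only - entry shape, predicate arguments and return convention are assumptions.
  my @expected = (
      { name => TEST_POINT_ANYTHING,         # startup entry: swallow events until...
        pred => sub {
            my ( $label ) = @_;
            return 1 if $label eq 'END';     # ...the first run's END event is seen
            return TEST_POINT_IGNORE;        # keep waiting
        },
      },
      # the second run of the scenario is then validated normally
      { name => 'START' }, { name => 'IDLE' }, { name => 'ACTIVE' },
      { name => 'IDLE'  }, { name => 'END'  },
  );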

anything

Similar to the startup_predicate test, this example demonstrates the use of TEST_POINT_ANYTHING in the middle of an expectation list to suspend further processing of the list until a condition is satisfied. The proceed condition is enforced by a predicate; thus any expectation entry that uses TEST_POINT_ANYTHING must include a predicate to validate the condition. In this example, TEST_POINT_ANYTHING suspends processing of the list until the IDLE event is seen.

predicate_validation

This test shows that you can use a custom predicate along with TEST_POINT_ANY_AT_ALL. In this way, you can force all validation to be handled in your predicate. The predicate should return 0 to fail the test, 1 to pass the test, or TEST_POINT_IGNORE to continue processing other test points.

This technique should only be used in cases where the other more explicit expectation list techniques are not sufficient.
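
A sketch of this approach, using the return convention described above (the setup and predicate signatures are assumptions):

  # Sketch only - setup and predicate signatures are assumptions; the return
  # convention (0 = fail, 1 = pass, TEST_POINT_IGNORE = keep going) is described above.
  my $tp = TestPointSetup(
      [ { name => TEST_POINT_ANY_AT_ALL, pred => \&validate_all } ],
  );
  sub validate_all {
      my ( $label, $data ) = @_;
      return 0 if $label =~ /^ERROR/;   # illustrative failure condition
      return 1 if $label eq 'END';      # pass once the final event arrives
      return TEST_POINT_IGNORE;         # otherwise keep processing
  }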

multiple_handlers

This test shows how you can use multiple test point setups simultaneously, each processing a different set of expectations. This can be a powerful technique and often produces more understandable test scenarios, particularly when you decompose your expectations into small, easy-to-digest expectation lists.

In this specific case, we create one handler that expects START and END, in sequence, and another that just verifies that the IDLE event happens twice. We deliberately create both handlers before starting the processing so that each handler has access to the stream of events as they happen during processing.
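
The two handlers might be sketched like this; both are created before the remote call so that each sees the full event stream (the call shapes are assumptions):

  # Sketch only - call shapes are assumptions; both handlers exist before processing starts.
  my $seq  = TestPointSetup( [ 'START', 'END' ] );                    # ordered: START, then END
  my $idle = TestPointSetup( [ { name => 'IDLE', count => 2 } ],      # IDLE occurs twice
                             undef, 'unordered' );
  Exp_DoStateChanges();
  my $pass = $seq->Check() && $idle->Check();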