Training Running Tests

From STRIDE Wiki
Revision as of 12:36, 1 June 2010 by Mikee (talk | contribs)

Background

Device connection and STRIDE test execution is handled by the STRIDE Runner (aka "the runner"). The runner is a command line tool with a number of options for listing runnable items, executing tests, tracing on test points, and uploading your results. The runner is designed for use in both ad-hoc execution of tests and fully automated CI execution. You've already seen some basic test execution scenarios in the other training sections - now we will look explicitly at several of the most common use-cases for the runner.

Please review the following reference articles before proceeding:

Build a test app

Let's begin by building an off-target test app to use for these examples. The sources we want to include in this app are test_in_script/Expectations and test_in_c_cpp/TestClass. Copy these source files to your sample_src directory and follow these instructions for building.

Listing items

Listing the contents of the database is an easy way to see which test units and fixturing functions are available for execution. Open a command line and try the following[1]:

stride --database="../out/TestApp.sidb" --list

You should see output something like this:

Functions
  Exp_DoStateChanges()
Test Units
  s2_testclass::Basic::Exceptions()
  s2_testclass::Basic::Fixtures()
  s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)
  s2_testclass::Basic::Simple()
  s2_testclass::RuntimeServices::Dynamic()
  s2_testclass::RuntimeServices::Override()
  s2_testclass::RuntimeServices::Simple()
  s2_testclass::RuntimeServices::VarComment()
  s2_testclass::srTest::Dynamic()
  s2_testclass::srTest::Simple()

A few things to notice:

  • The Functions (if any) are listed before the Test Units.
  • Function and Test Unit arguments (input parameters), if any, are shown along with their parameter types.

Tracing on test points

Tracing using the runner will show any STRIDE Test Points that are generated on the device during the window of time that the runner is connected. If you have test points that are continuously being emitted (for instance, in some background thread), then you can just connect to the device with tracing enabled to see them (you'll need to specify a --trace_timeout parameter to tell the runner how long to trace for). If your test points require some fixturing to be hit, then you'll need to specify a script to execute that makes the necessary fixture calls. This is precisely what we did in our previous Instrumentation training. If you recall from that training, we did the following[1]:
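For the continuously emitting case described above (no fixturing script needed), a trace-only invocation might look like the following sketch. The timeout value and its units here are assumptions for illustration; check the runner reference for the exact semantics of --trace_timeout.

```shell
# Trace-only session: no --run script, just connect and collect test points.
# --trace_timeout (value assumed to be in seconds) tells the runner how
# long to trace before disconnecting.
stride --device="TCP:localhost:8000" \
       --database="../out/TestApp.sidb" \
       --trace \
       --trace_timeout=30
```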

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace 

You should see output resembling this:

Loading database...
Connecting to device...
Executing...
  script "C:\s2\seaside\SDK\Windows\src\do_state_changes.pl"
1032564500 POINT "SET_NEW_STATE" - START [../sample_src/s2_expectations_source.c:49]
1032564501 POINT "START" [../sample_src/s2_expectations_source.c:63]
1032574600 POINT "SET_NEW_STATE" - IDLE [../sample_src/s2_expectations_source.c:49]
1032574601 POINT "IDLE" - 02 00 00 00 [../sample_src/s2_expectations_source.c:78]
1032584600 POINT "SET_NEW_STATE" - ACTIVE [../sample_src/s2_expectations_source.c:49]
1032584601 POINT "ACTIVE" - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]
1032594700 POINT "ACTIVE Previous State" - IDLE [../sample_src/s2_expectations_source.c:103]
1032594701 POINT "JSON_DATA" - {"string_field": "a-string-value", "int_field": 42, "bool_field": true, "hex_field": "0xDEADBEEF"} [../sample_src/s2_expectations_source.c:105]
1032604800 POINT "SET_NEW_STATE" - IDLE [../sample_src/s2_expectations_source.c:49]
1032604801 POINT "IDLE" - 04 00 00 00 [../sample_src/s2_expectations_source.c:78]
1032614800 POINT "SET_NEW_STATE" - END [../sample_src/s2_expectations_source.c:49]
1032614801 POINT "END" - 05 00 00 00 [../sample_src/s2_expectations_source.c:117]
    > 0 passed, 0 failed, 0 in progress, 0 not in use.
  ---------------------------------------------------------------------
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.

Disconnecting from device...
Saving result file...

Now, let's trace again, but include a filter expression for the test points:

stride  --device="TCP:localhost:8000" --database="../out/TestApp.sidb"  --run=do_state_changes.pl --trace="ACTIVE.*" 

...and now you should see fewer trace points emitted:

Loading database...
Connecting to device...
Executing...
  script "C:\s2\seaside\SDK\Windows\src\do_state_changes.pl"
1047379801 POINT "ACTIVE" - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]
1047389800 POINT "ACTIVE Previous State" - IDLE [../sample_src/s2_expectations_source.c:103]
    > 0 passed, 0 failed, 0 in progress, 0 not in use.
  ---------------------------------------------------------------------
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.

Disconnecting from device...
Saving result file...

The --trace argument accepts a filter expression in the form of a regular expression that is applied to the test point label. In this case, we've specified a filter that permits any test points whose label begins with ACTIVE. Filtering gives you a convenient way to quickly inspect specific behavioral aspects of your STRIDE-instrumented software.
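Because the filter is just a regular expression matched against the label, you can sanity-check an expression before running a trace session. A minimal Python sketch (the labels are taken from the unfiltered trace output above; an anchored match is assumed here, which is consistent with the filtered output shown):

```python
import re

# Test point labels observed in the earlier unfiltered trace session
labels = [
    "SET_NEW_STATE",
    "START",
    "IDLE",
    "ACTIVE",
    "ACTIVE Previous State",
    "JSON_DATA",
    "END",
]

# re.match anchors at the start of the label, so "ACTIVE.*" selects
# only labels that begin with ACTIVE.
pattern = re.compile(r"ACTIVE.*")
matching = [label for label in labels if pattern.match(label)]
print(matching)  # ['ACTIVE', 'ACTIVE Previous State']
```

This mirrors the filtered session above, where only the two ACTIVE-prefixed test points were emitted.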

Organizing with suites

Now let's look briefly at how you can use the runner to organize subsets of test units into suites. First, let's run our current set of test units without any explicit suite hierarchy:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb"  --run=*

If you examine the results file, you will see that this creates the default flat hierarchy with each test unit's corresponding suite at the root level of the report.

Now, let's try grouping our tests into suites:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="/BasicTests{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}" --run="/srTest{s2_testclass::srTest::Dynamic; s2_testclass::srTest::Simple}"

Now when you view the results, you will see two top-level suites - BasicTests and srTest - each containing the suites for the test units we assigned to it.

If you plan to use this functionality to organize your tests into subsuites, we recommend that you create options files to specify test unit groupings. This makes it easier to update and manage the suite hierarchy for your tests.
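One lightweight way to keep groupings in a single editable place is a plain shell wrapper, shown here as a sketch (the file name suites.sh is just an example; the runner's own options-file mechanism, whose exact format is covered in the runner reference, serves the same purpose):

```shell
#!/bin/sh
# suites.sh - keeps the suite hierarchy in one place so it can be updated
# without retyping the long --run arguments on every invocation.
stride --device="TCP:localhost:8000" \
       --database="../out/TestApp.sidb" \
       --run="/BasicTests{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}" \
       --run="/srTest{s2_testclass::srTest::Dynamic; s2_testclass::srTest::Simple}"
```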

Notes

  1. These examples assume you are executing the runner from the src directory of your off-target framework. If that's not the case, you will need to adjust the database path accordingly.