== Introduction  ==
 
 
STRIDE enables testing of C/C++ code through the use of [http://en.wikipedia.org/wiki/XUnit xUnit-style] test units. Test units are written by developers, captured using an SCL pragma, and executed from the host. STRIDE facilitates the execution of some or all of the test units by automatically creating the corresponding entry points on the target.
 
 
 
== What are STRIDE Test Units? ==

'''STRIDE Test Units''' is a general term for [http://en.wikipedia.org/wiki/XUnit xUnit-style] test modules running within the STRIDE runtime framework. These tests, written in C and C++, are compiled and linked with your embedded software and run in-place on your target hardware. They are suitable both for developer unit testing and for end-to-end integration testing.

An external [[Stride Runner|Test Runner]] is provided which controls the execution of the tests and publishes test results to the local file system and optionally to S2's Internet [[STRIDE Test Space]].
 
== Test Unit Features ==

In all cases, STRIDE Test Units provide the following capabilities typical of all xUnit-style testing frameworks:
 
* Specification of a test as a test method
* Aggregation of individual tests into test suites which form execution and reporting units
* Specification of expected results within test methods (typically by using one or more [[Test Code Macros]])
* Test fixturing (optional setup and teardown)
* Test parametrization (optional constructor/initialization parameters)
* Automated execution
* Automated results report generation
  
=== Unique STRIDE Test Unit Features ===

In addition, STRIDE Test Units offer these unique features:

; On-Target Execution
: Tests execute on the target hardware in a true operational environment. Execution and reporting are controlled from a remote desktop host (Windows, Linux, or FreeBSD)
; Dynamic Test and Suite Generation
: Test cases and suites can be created and manipulated at runtime
; [[Using Test Doubles|Test Doubles]]
: Dynamic runtime function substitution to implement on-the-fly mocks, stubs, and doubles
; [[Test Point Testing in C/C++|Behavior Testing]] (Test Points)
: Support for testing of asynchronous activities occurring in multiple threads
; Multiprocess Testing Framework
: Support for testing across multiple processes running simultaneously on the target
; Automatic Timing Data Collection
: Durations are automatically measured for each test case
; Automatic Results Publishing to Local Disk and Internet
: Automatic publishing of test results to [[STRIDE Test Space]]
 
 
== Test Unit Deployment ==
 
STRIDE supports three different test unit deployment strategies, each of which is demonstrated in the samples:
 
 
 
* test units based on '''C++ test classes''',
 
* test units based on '''C test functions''',
 
* test units based on a '''C language implementation of test classes''' (struct containing function pointers)
 
 
 
 
 
 
 
 
 
The required steps to get started with writing test units are as follows:
 
 
 
<ol>
 
<li>Write a test unit and capture it with one of the [[SCL_Pragmas#Test_Units|Test Units pragmas]].</li>
 
You may simply create a C++ class with a number of test methods and SCL-capture it using the [[scl_test_class]] pragma:
 
<source lang=cpp>
 
// testcpp.h
 
 
 
class Simple
 
{
 
public:
 
    int test1() { return  0;} // PASS
 
    int test2() { return 23;} // FAIL <>0
 
    bool test3() { return true;} // PASS
 
    bool test4() { return false;} // FAIL
 
};
 
 
 
#ifdef _SCL
 
#pragma scl_test_class(Simple)
 
#endif
 
</source>
 
 
 
Or, if you are writing in C, create a set of global functions and SCL-capture them with the [[scl_test_flist]] pragma (in more complicated scenarios where initialization is required, the [[scl_test_cclass]] pragma may be a better choice):
 
<source lang=c>
 
// testc.h
 
 
 
#ifdef __cplusplus
 
extern "C" {
 
#endif
 
 
 
int test1(void)
 
{
 
    return 0; // PASS
 
}
 
 
 
int test2(void)
 
{
 
    return 23; // FAIL <>0
 
}
 
 
 
#ifdef __cplusplus
 
}
 
#endif
 
 
 
#ifdef _SCL
 
#pragma scl_test_flist("Simple", test1, test2)
 
#endif
 
</source>
 
 
 
<li>Build and generate the IM code using STRIDE [[Build Tools]]:</li>
 
<pre>
 
> s2scompile --c++ testcpp.h
 
> s2scompile --c testc.h
 
> s2sbind --output=test.sidb testcpp.h.meta testc.h.meta
 
> s2sinstrument --im_name=test test.sidb
 
</pre>
 
''If using [[STRIDE Studio]], create a new workspace (or open an existing one), add the above source files, adjust your compiler settings, build and generate the IM manually through the UI, or write custom scripts to automate the same sequence.''
 
<li>Build the generated IM code along with the rest of the source to create your application's binary.</li>
 
<li>Download your application to the Target and start it.</li>
 
<li>Execute your test units and publish results using the [[Test_Runners#TestUnitRun.pl|Test Unit Runner]].</li>
 
<pre>
 
> perl testunitrun.pl -u -d test.sidb
 
</pre>
 
''If using [[STRIDE Studio]], you can execute individual test units interactively by opening the user interface view corresponding to the test unit you would like to execute and calling it. Furthermore, you may write a simple script to automate your [[#Scripting_a_Test_Unit|test unit execution]] and result publishing.''
 
</ol>
 
 
 
== Requirements  ==
 
 
 
Several variations on typical xUnit-style test units are supported. The additional supported features include:
 
 
 
*Test status can be set using STRIDE Runtime APIs ''or'' by specifying simple return types for test methods.
 
*Integral return types: 0 = PASS; &lt;&gt; 0 = FAIL
 
*C++ bool return type: true = PASS; false = FAIL
 
*A void return type with no explicit status setting is assumed to PASS
 
*Test writers can create additional child suites and tests at runtime by using Runtime APIs.
 
*We do not rely on exceptions for reporting of status.
 
*One of the [[SCL_Pragmas#Test_Units|Test Unit pragmas]] must be applied.
 
 
 
The STRIDE test class framework has the following requirements of each test class:
 
 
 
*The test class must have a suitable default (no-argument) constructor.
 
*The test class must have one or more public methods suitable as test methods. Allowable test methods always take no arguments (void) and return either void, simple integer types (int, short, long or char) or bool. At this time, we do not allow typedef types or macros for the return values specification.
 
*The [[scl_test_class]] pragma must be applied to the class.
 
 
 
 
 
=== Simple example using return values for status  ===
 
==== Using a Test Class ====
 
 
 
<source lang=cpp>
 
#include <srtest.h>
 
 
 
class Simple {
 
public:
 
    int tc_Int_ExpectPass() {return 0;}
 
    int tc_Int_ExpectFail() {return -1;}
 
    bool tc_Bool_ExpectPass() {return true;}
 
    bool tc_Bool_ExpectFail() {return false;}
 
};
 
 
 
#ifdef _SCL
 
#pragma scl_test_class(Simple)
 
#endif
 
</source>
 
 
 
==== Using a Test Function List ====
 
<source lang=c>
 
#include <srtest.h>
 
 
 
#ifdef __cplusplus
 
extern "C" {
 
#endif
 
 
 
int tf_Int_ExpectPass(void) {return 0;}
 
int tf_Int_ExpectFail(void) {return -1;}
 
 
 
#ifdef _SCL
 
#pragma scl_test_flist("Simple", tf_Int_ExpectPass, tf_Int_ExpectFail)
 
#endif
 
 
 
#ifdef __cplusplus
 
}
 
#endif
 
</source>
 
 
 
=== Simple example using runtime test service APIs  ===
 
==== Using a Test Class ====
 
<source lang=cpp>
 
#include <srtest.h>
 
 
 
class RuntimeServices_basic {
 
public:
 
  void tc_ExpectPass()
 
  {
 
    srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should pass");
 
    srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_PASS, 0);
 
  }
 
  void tc_ExpectFail()
 
  {
 
    srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should fail");
 
    srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_FAIL, 0);
 
  }
 
  void tc_ExpectInProgress()
 
  {
 
    srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should be in progress");
 
  }
 
};
 
 
 
#ifdef _SCL
 
#pragma scl_test_class(RuntimeServices_basic)
 
#endif
 
</source>
 
 
 
==== Using a Test Function List ====
 
<source lang=c>
 
#include <srtest.h>
 
 
 
#ifdef __cplusplus
 
extern "C" {
 
#endif
 
 
 
void tf_ExpectPass(void)
 
{
 
  srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should pass");
 
  srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_PASS, 0);
 
}
 
void tf_ExpectFail(void)
 
{
 
  srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should fail");
 
  srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_FAIL, 0);
 
}
 
void tf_ExpectInProgress(void)
 
{
 
  srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should be in progress");
 
}
 
 
 
#ifdef _SCL
 
#pragma scl_test_flist("RuntimeServices_basic", tf_ExpectPass, tf_ExpectFail, tf_ExpectInProgress)
 
#endif
 
 
#ifdef __cplusplus
 
}
 
#endif
 
</source>
 
 
 
=== Simple example using srTest base class  ===
 
<source lang=cpp>
 
#include <srtest.h>
 
 
 
class MyTest : public stride::srTest {
 
public:
 
  void tc_ExpectPass()
 
  {
 
    testCase.AddComment("this test should pass");
 
    testCase.SetStatus(srTEST_PASS, 0);
 
  }
 
  void tc_ExpectFail()
 
  {
 
    testCase.AddComment("this test should fail");
 
    testCase.SetStatus(srTEST_FAIL, 0);
 
  }
 
  void tc_ExpectInProgress()
 
  {
 
    testCase.AddComment("this test should be in progress");
 
  }
 
  int tc_ChangeMyName()
 
  {
 
    testCase.AddComment("this test should have name = MyChangedName");
 
    testCase.SetName("MyChangedName");
 
    return 0;
 
  }
 
  int tc_ChangeMyDescription()
 
  {
 
    testCase.AddComment("this test should have a description set");
 
    testCase.SetDescription("this is my new description");
 
    return 0;
 
  }
 
};
 
 
 
#ifdef _SCL
 
#pragma scl_test_class(MyTest)
 
#endif
 
</source>
 
 
 
 
 
 
 
== Test Macros ==
 
 
 
The STRIDE Test Unit implementation also provides a set of Test Macros (declared in srtest.h) for use within test methods. The macros are optional; you are not required to use them in your test units. They provide shortcuts for testing assertions and automatically annotate the report when a failure occurs.
 
 
 
The macros can be used in both C++ and C test unit code (note that there is no C version of the exception macros).
 
 
 
=== General guidelines for all macros ===
 
 
 
If an expectation fails, the srEXPECT_xx macro sets the current test case to FAIL (if its status has not already been set) and produces an annotation in the report. If the expectation succeeds, no action is taken.
 
 
 
If an assertion fails, the srASSERT_xx macro sets the current test case to FAIL (if its status has not already been set), inserts an annotation into the report, and then returns from the current function. Because of this early return, srASSERT_xx macros can only be used in test functions that return void. If the assertion succeeds, no action is taken.
 
 
 
The srLOG macros add a new comment to the current test case and produce an annotation in the report with the specified level of importance.
 
 
 
The report annotation produced by a failed macro always includes the source file and line along with details about the condition that failed and the failing values.
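
As a minimal sketch (the class name and test logic below are made up for illustration; only the macros are part of the STRIDE API), the two macro families behave differently when a check fails:

<source lang=cpp>
#include <srtest.h>

// Hypothetical test class illustrating expectation vs. assertion behavior.
class MacroBehavior {
public:
    void tc_Expect()
    {
        // On failure the test continues: both checks run and each
        // failure is annotated in the report.
        srEXPECT_EQ(2 + 2, 5);       // fails, test case is marked FAIL
        srEXPECT_TRUE(1 + 1 == 2);   // still evaluated, passes
    }
    void tc_Assert()
    {
        // On failure the macro returns from the test method immediately,
        // so the second check is never reached. Note the void return
        // type, which srASSERT_xx macros require.
        srASSERT_EQ(2 + 2, 5);       // fails, annotates, and returns
        srASSERT_TRUE(1 + 1 == 2);   // not reached
    }
};

#ifdef _SCL
#pragma scl_test_class(MacroBehavior)
#endif
</source>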
 
 
 
=== Boolean Macros ===
 
The boolean macros take a single condition expression, ''cond'', that evaluates to an integral type or bool. The condition is evaluated exactly once. The expectation or assertion fails when ''cond'' does not have the expected truth value: a zero (false) result fails the TRUE variants, and a non-zero (true) result fails the FALSE variants. When a failure occurs the report is annotated.
 
 
 
{| class="prettytable"
 
| colspan="3" | '''Boolean'''
 
 
 
|-
 
| srEXPECT_TRUE(''cond'');
 
| srASSERT_TRUE(''cond'');
 
| ''cond'' is true
 
 
 
|-
 
| srEXPECT_FALSE(''cond'');
 
| srASSERT_FALSE(''cond'');
 
| ''cond'' is false
 
 
 
|}
 
 
 
=== Comparison Macros ===
 
 
 
Comparison macros take two operands and compare them using the indicated operator. The comparison macros will work for primitive types as well as objects that have the corresponding comparison operator implemented. 
 
 
 
{| class="prettytable"
 
| colspan="3" | '''Comparison'''
 
 
 
|-
 
| srEXPECT_EQ(''val1'', ''val2'');
 
| srASSERT_EQ(''val1'', ''val2'');
 
| ''val1'' == ''val2''
 
 
 
|-
 
| srEXPECT_NE(''val1'', ''val2'');
 
| srASSERT_NE(''val1'', ''val2'');
 
| ''val1'' != ''val2''
 
 
 
|-
 
| srEXPECT_LT(''val1'', ''val2'');
 
| srASSERT_LT(''val1'', ''val2'');
 
| ''val1''<nowiki> < </nowiki>''val2''
 
 
 
|-
 
| srEXPECT_LE(''val1'', ''val2'');
 
| srASSERT_LE(''val1'', ''val2'');
 
| ''val1''<nowiki> <= </nowiki>''val2''
 
 
 
|-
 
| srEXPECT_GT(''val1'', ''val2'');
 
| srASSERT_GT(''val1'', ''val2'');
 
| ''val1'' > ''val2''
 
 
 
|-
 
| srEXPECT_GE(''val1'', ''val2'');
 
| srASSERT_GE(''val1'', ''val2'');
 
| ''val1'' >= ''val2''
 
 
 
|}
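
For example (a brief sketch; the values are chosen only for illustration):

<source lang=cpp>
#include <srtest.h>

class ComparisonExamples {
public:
    void tc_Compare()
    {
        int count = 3;
        srEXPECT_EQ(count, 3);    // passes: 3 == 3
        srEXPECT_LT(count, 10);   // passes: 3 < 10
        srEXPECT_GE(count, 4);    // fails: 3 >= 4 is false, so the report is annotated
        // Objects can be compared too, provided the corresponding
        // comparison operator is implemented for them.
    }
};

#ifdef _SCL
#pragma scl_test_class(ComparisonExamples)
#endif
</source>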
 
 
 
=== C String Comparison Macros ===
 
 
 
C String Comparison Macros are intended only for use with C-style zero-terminated strings. The strings can be char or wchar_t based. In particular, these macros should not be used with objects of a string class (such as std::string), since such classes have overloaded comparison operators; use the standard comparison macros for those instead.
 
 
 
* An empty string will appear in error message output as “”. A null string will appear as NULL with no surrounding quotes. Otherwise all output strings are quoted.
 
* The type of str1 and str2 must be compatible with ''const char*'' or ''const wchar_t*''.
 
 
 
{| class="prettytable"
 
| colspan="3" | '''C-string comparison'''
 
 
 
|-
 
| srEXPECT_STREQ(''str1'', ''str2'');
 
| srASSERT_STREQ(''str1'', ''str2'');
 
| ''str1'' and ''str2'' have the same content
 
 
 
|-
 
| srEXPECT_STRNE(''str1'', ''str2'');
 
| srASSERT_STRNE(''str1'', ''str2'');
 
| ''str1'' and ''str2'' have different content
 
 
 
|-
 
| srEXPECT_STRCASEEQ(''str1'', ''str2'');
 
| srASSERT_STRCASEEQ(''str1'', ''str2'');
 
| ''str1'' and ''str2'' have the same content, ignoring case.
 
 
 
|-
 
| srEXPECT_STRCASENE(''str1'', ''str2'');
 
| srASSERT_STRCASENE(''str1'', ''str2'');
 
| ''str1'' and ''str2'' have different content, ignoring case.
 
 
 
|}
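
A short illustrative sketch (the strings are made up):

<source lang=cpp>
#include <srtest.h>

class CStringExamples {
public:
    void tc_Strings()
    {
        const char* greeting = "Hello";

        srEXPECT_STREQ(greeting, "Hello");      // passes: same content
        srEXPECT_STRNE(greeting, "World");      // passes: different content
        srEXPECT_STRCASEEQ(greeting, "HELLO");  // passes: same content, ignoring case
    }
};

#ifdef _SCL
#pragma scl_test_class(CStringExamples)
#endif
</source>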
 
 
 
=== Exception Macros ===
 
Exception macros  are used to ensure that expected exceptions are thrown. They require exception support from the target compiler. If the target compiler does not have exception support the macros cannot be used and must be disabled.
 
 
 
{| class="prettytable"
 
| colspan="3" | '''Exceptions'''
 
 
 
|-
 
| srEXPECT_THROW(statement, ex_type);
 
| srASSERT_THROW(statement, ex_type);
 
| ''statement'' throws an exception of type ''ex_type''
 
 
 
|-
 
| srEXPECT_THROW_ANY(''statement'');
 
| srASSERT_THROW_ANY(''statement'');
 
| ''statement'' throws an exception (type not important)
 
 
 
|-
 
| srEXPECT_NO_THROW(''statement'');
 
| srASSERT_NO_THROW(''statement'');
 
| ''statement'' does not throw an exception
 
 
 
|}
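
As a sketch (the parse helper below is hypothetical and exists only to have something that throws):

<source lang=cpp>
#include <srtest.h>
#include <stdexcept>

// Hypothetical helper that throws on invalid input.
static void parse(int value)
{
    if (value < 0)
        throw std::out_of_range("negative value");
}

class ExceptionExamples {
public:
    void tc_Exceptions()
    {
        srEXPECT_THROW(parse(-1), std::out_of_range);  // passes: expected exception type thrown
        srEXPECT_THROW_ANY(parse(-1));                 // passes: any exception is acceptable
        srEXPECT_NO_THROW(parse(5));                   // passes: no exception is thrown
    }
};

#ifdef _SCL
#pragma scl_test_class(ExceptionExamples)
#endif
</source>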
 
 
 
=== Predicate Macros ===
 
Predicate macros allow user control over the pass/fail decision made by a macro. A predicate is a user-implemented function returning bool that is passed to the macro, along with the arguments it should be called with. The macros allow for predicate functions with up to four parameters.
 
 
 
{| class="prettytable"
 
| colspan="3" | '''Predicates'''
 
 
 
|-
 
| srEXPECT_PRED1(''pred'', ''val1'')
 
| srASSERT_PRED1(''pred'', ''val1'')
 
| ''pred''(''val1'') returns true
 
 
 
|-
 
| srEXPECT_PRED2(''pred'', ''val1'', ''val2'')
 
| srASSERT_PRED2(''pred'', ''val1'', ''val2'')
 
| ''pred''(''val1'', ''val2'') returns true
 
 
 
|-
 
| …(up to arity of 4)
 
|
 
|
 
 
 
|}
 
All predicate macros require a predicate function which returns bool; predicates with one to four parameters are supported. Failures produce report annotations as described in the general guidelines above. A sketch follows.
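
A minimal sketch, assuming user-supplied predicates (IsEven and Divides below are made up for illustration):

<source lang=cpp>
#include <srtest.h>

// User-defined predicate: true when the value is even.
static bool IsEven(int value)
{
    return (value % 2) == 0;
}

// User-defined two-argument predicate: true when a divides b evenly.
static bool Divides(int a, int b)
{
    return a != 0 && (b % a) == 0;
}

class PredicateExamples {
public:
    void tc_Predicates()
    {
        srEXPECT_PRED1(IsEven, 4);      // passes: IsEven(4) returns true
        srEXPECT_PRED1(IsEven, 3);      // fails: IsEven(3) returns false, report is annotated
        srEXPECT_PRED2(Divides, 3, 9);  // passes: Divides(3, 9) returns true
    }
};

#ifdef _SCL
#pragma scl_test_class(PredicateExamples)
#endif
</source>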
 
 
 
=== Floating Point Comparison Macros ===
 
Floating point macros are for comparing equivalence (or near equivalence) of floating point numbers. These macros are necessary because exact equivalence comparisons of floating point numbers often fail due to round-off errors.
 
 
 
{| class="prettytable"
 
| colspan="3" | '''Floating Point comparison'''
 
 
 
|-
 
| srEXPECT_NEAR(val1, val2, epsilon);
 
| srASSERT_NEAR(val1, val2, epsilon);
 
| The absolute value of the difference between ''val1'' and ''val2'' does not exceed ''epsilon''.
 
|}
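
For example (a sketch; the epsilon value is arbitrary):

<source lang=cpp>
#include <srtest.h>

class FloatExamples {
public:
    void tc_Near()
    {
        double computed = 0.1 + 0.2;         // accumulates round-off error
        srEXPECT_EQ(computed, 0.3);          // fails: 0.1 + 0.2 != 0.3 exactly in IEEE doubles
        srEXPECT_NEAR(computed, 0.3, 1e-9);  // passes: the difference is well within epsilon
    }
};

#ifdef _SCL
#pragma scl_test_class(FloatExamples)
#endif
</source>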
 
 
 
=== Log Macros ===
 
Log macros allow message logging with a level of importance: error, warning, or info.
 
 
 
Note that the srLOG macros can be used from threads other than the thread on which the test is running.
 
 
 
{| class="prettytable"
 
| colspan="2" | '''Logging'''
 
 
 
|-
 
| srLOG_ERROR(''message'')
 
| ''message'' is a char* to a null-terminated string
 
|-
 
| srLOG_WARNING(''message'')
 
| ''message'' is a char* to a null-terminated string
 
|-
 
| srLOG_INFO(''message'')
 
| ''message'' is a char* to a null-terminated string
 
|}
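
For example (the messages are made up):

<source lang=cpp>
#include <srtest.h>

class LogExamples {
public:
    void tc_Logging()
    {
        srLOG_INFO("starting the checkout sequence");          // info-level comment in the report
        srLOG_WARNING("retrying after a transient timeout");   // warning-level comment
        srLOG_ERROR("device did not respond");                 // error-level comment
    }
};

#ifdef _SCL
#pragma scl_test_class(LogExamples)
#endif
</source>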
 
 
 
=== Dynamic Test Case Macros ===
 
The macros presented so far always report against the currently executing test case and so cannot be used with dynamic test cases. To handle dynamic test cases, each macro has a variant that takes an additional parameter identifying the test case to report against. Other than this, these macros provide exactly the same functionality as their non-dynamic peers. The dynamic macros are listed below; all require a test case handle (a value of type srTestCaseHandle_t from srtest.h) to be passed as the first parameter.
 
 
 
{| class="prettytable"
 
| '''''Nonfatal assertion'''''
 
| '''''Fatal Assertion'''''
 
 
 
|-
 
| colspan="2" | '''Boolean'''
 
 
 
|-
 
| srEXPECT_TRUE_DYN(tc, ''cond'');
 
| srASSERT_TRUE_DYN(tc, ''cond'');
 
 
 
|-
 
| srEXPECT_FALSE_DYN(tc, ''cond'');
 
| srASSERT_FALSE_DYN(tc, ''cond'');
 
 
 
|-
 
| colspan="2" | '''Comparison'''
 
 
 
|-
 
| srEXPECT_EQ_DYN(tc, ''val1'', ''val2'');
 
| srASSERT_EQ_DYN(tc, ''val1'', ''val2'');
 
 
 
|-
 
| srEXPECT_NE_DYN(tc, ''val1'', ''val2'');
 
| srASSERT_NE_DYN(tc, ''val1'', ''val2'');
 
 
 
|-
 
| srEXPECT_LT_DYN(tc, ''val1'', ''val2'');
 
| srASSERT_LT_DYN(tc, ''val1'', ''val2'');
 
 
 
|-
 
| srEXPECT_LE_DYN(tc, ''val1'', ''val2'');
 
| srASSERT_LE_DYN(tc, ''val1'', ''val2'');
 
 
 
|-
 
| srEXPECT_GT_DYN(tc, ''val1'', ''val2'');
 
| srASSERT_GT_DYN(tc, ''val1'', ''val2'');
 
 
 
|-
 
| srEXPECT_GE_DYN(tc, ''val1'', ''val2'');
 
| srASSERT_GE_DYN(tc, ''val1'', ''val2'');
 
 
 
|-
 
| colspan="2" | '''C-string comparison'''
 
 
 
|-
 
| srEXPECT_STREQ_DYN(tc, ''str1'', ''str2'');
 
| srASSERT_STREQ_DYN(tc, ''str1'', ''str2'');
 
 
 
|-
 
| srEXPECT_STRNE_DYN(tc, ''str1'', ''str2'');
 
| srASSERT_STRNE_DYN(tc, ''str1'', ''str2'');
 
 
 
|-
 
| srEXPECT_STRCASEEQ_DYN(tc, ''str1'', ''str2'');
 
| srASSERT_STRCASEEQ_DYN(tc, ''str1'', ''str2'');
 
 
 
|-
 
| srEXPECT_STRCASENE_DYN(tc, ''str1'', ''str2'');
 
| srASSERT_STRCASENE_DYN(tc, ''str1'', ''str2'');
 
 
 
|-
 
| colspan="2" | '''Exceptions'''
 
 
 
|-
 
| srEXPECT_THROW_DYN(tc, statement, ex_type);
 
| srASSERT_THROW_DYN(tc, statement, ex_type);
 
 
 
|-
 
| srEXPECT_THROW_ANY_DYN(tc, ''statement'');
 
| srASSERT_THROW_ANY_DYN(tc, ''statement'');
 
 
 
|-
 
| srEXPECT_NO_THROW_DYN(tc, ''statement'');
 
| srASSERT_NO_THROW_DYN(tc, ''statement'');
 
 
 
|-
 
| colspan="2" | '''Predicates'''
 
 
 
|-
 
| srEXPECT_PRED1_DYN(tc, ''pred'', ''val1'');
 
| srASSERT_PRED1_DYN(tc, ''pred'', ''val1'');
 
 
 
|-
 
| srEXPECT_PRED2_DYN(tc, ''pred'', ''val1'', ''val2'');

| srASSERT_PRED2_DYN(tc, ''pred'', ''val1'', ''val2'');
 
 
 
|-
 
| …(up to arity of 4)
 
|
 
 
 
|-
 
| colspan="2" | '''Floating Point'''
 
 
 
|-
 
| srEXPECT_NEAR_DYN(tc, ''val1'', ''val2'', ''epsilon'');
 
| srASSERT_NEAR_DYN(tc, ''val1'', ''val2'', ''epsilon'');
 
 
 
|-
 
| colspan="2" | '''Logging'''
 
 
 
|-
 
| srLOG_DYN(tc, ''level'', ''message'');
 
|
 
 
 
|}
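
A sketch of the dynamic variants is below. How the srTestCaseHandle_t is obtained is omitted, since it depends on how the dynamic test case was created with the Runtime APIs; the helper function and its checks are made up for illustration:

<source lang=cpp>
#include <srtest.h>

// Hypothetical helper that reports against a dynamically created test case.
// The handle 'tc' is assumed to have been obtained from the Runtime APIs
// when the dynamic test case was created; that code is not shown here.
static void CheckDevice(srTestCaseHandle_t tc, int status, const char* name)
{
    srEXPECT_EQ_DYN(tc, status, 0);    // reports against 'tc', not the current test case
    srEXPECT_STRNE_DYN(tc, name, "");  // C-string comparison, also reported against 'tc'
    srTestCaseAddComment(tc, "device check complete");
}
</source>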
 
 
 
=== Use operator << for report annotation (C++ tests only) ===
 
 
 
In C++ test code, all macros support adding annotations to the report using the << operator. For example:
 
 
 
<source lang="cpp">
 
srEXPECT_TRUE(a != b) << "My custom message";
 
</source>
 
 
 
As delivered, the macros support stream annotations for all numeric types, C strings (char* or wchar_t*), and types allowing implicit conversion to a numeric type or C string. The user may overload the << operator in order to annotate reports using any custom type. An example is below.
 
 
 
The following will compile and execute successfully given that the << operator is overloaded as shown:
 
 
 
<source lang="cpp">
 
#include <srtest.h>
 
 
 
// MyCustomClass implementation
 
class MyCustomClass
 
{
 
public:
 
  MyCustomClass(int i) : m_int(i) {}
 
 
 
private:
 
  int m_int;
 
  friend stride::Message& operator<<(stride::Message& ss, const MyCustomClass& obj);
 
};
 
 
 
stride::Message& operator<<(stride::Message& ss, const MyCustomClass& obj)
 
{
 
  ss << obj.m_int;
 
  return ss;
 
}
 
 
 
void test()
 
{
 
    MyCustomClass custom(34);
 
 
 
    srEXPECT_FALSE(true) << custom;
 
}
 
</source>
 
 
 
== Using Testpoints ==
 
Testpoints are covered in the article [[Using Testpoints]].
 
 
 
== Scripting a Test Unit ==
 
 
 
To automate the execution and reporting of a Test Unit, a script is required. Scripts can be written by hand or automatically generated using the Script Wizard and a corresponding template script. A scripting tool for executing a test unit is the [[AutoScript#ascript.TestUnits|AutoScript TestUnits]] collection. An [[AutoScript#ascript.TestUnits.Item|Ascript TestUnit]] object assembles all of the reporting information for the test unit and its corresponding test methods.
 
 
 
*Require usage of the [[AutoScript#ascript.TestUnits|AutoScript TestUnits]] collection
 
*Can be written by hand (see the examples below)
 
*Can leverage [[Templates|Templates]] via the Script Wizard
 
*Order of multiple test units dictated by SUID assignment
 
 
 
 
 
=== Single test unit example ===
 
 
 
The following example script is used to harness a test unit that has been captured using #pragma scl_test_class(Simple).
 
 
 
'''JavaScript'''
 
<source lang=javascript>
 
var tu = ascript.TestUnits.Item("Simple");
 
// Ensure test unit exists
 
if (tu != null)
 
  tu.Run();
 
</source>
 
 
 
'''Perl'''
 
<source lang=perl>
 
use strict;
 
use Win32::OLE;
 
Win32::OLE->Option(Warn => 3);
 
 
 
my $tu = $main::ascript->TestUnits->Item("Simple");
 
if (defined $tu) {
 
  $tu->Run();
 
}
 
</source>
 
 
 
=== Multiple test units example ===
 
 
 
The following example script is used to harness two test units that have been captured using #pragma scl_test_class(Simple1) and #pragma scl_test_class(Simple2).
 
 
 
'''JavaScript'''
 
<source lang=javascript>
 
var Units = ["Simple1","Simple2"];
 
 
 
// iterate through each function
 
for (var i in Units)
 
{
 
  var tu = ascript.TestUnits.Item(Units[i]);
 
  if ( tu != null )
 
    tu.Run();
 
}
 
</source>
 
 
 
'''Perl'''
 
<source lang=perl>
 
use strict;
 
use Win32::OLE;
 
Win32::OLE->Option(Warn => 3);
 
 
 
# initialize an array with all selected function names
 
my @UnitNames = ("Simple1","Simple2");
 
foreach (@UnitNames) { 
 
  my $tu = $main::ascript->TestUnits->Item($_);

  die "TestUnit not found: $_\n" unless (defined $tu);
 
  $tu->Run();
 
}
 
</source>
 
  
 
[[Category:Test Units]]
[[Category:Reference]]
 
