Writing Test Suites
Support for Test Suite Authors
The ct module provides the main interface for writing test cases. This includes, for example, the following:
- Functions for printing and logging
- Functions for reading configuration data
- Function for terminating a test case with an error reason
- Function for adding comments to the HTML overview page
For details about these functions, see module ct.
The Common Test application also includes other modules named ct_<component>, which provide various support, mainly simplified use of communication protocols such as RPC, SNMP, FTP, Telnet, and others.
Test Suites
A test suite is an ordinary Erlang module that contains test cases. It is recommended that the module has a name on the form *_SUITE.erl. Otherwise, the directory and auto compilation function in Common Test cannot locate it (at least not by default).
It is also recommended that the ct.hrl header file is included in all test suite modules.
Each test suite module must export the function all/0, which returns the list of all test case groups and test cases to be executed in that module.
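A minimal sketch of such a suite skeleton follows (the suite, group, and test case names are hypothetical):

-module(my_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0]).

%% Return every group and test case to be executed in this module
all() ->
    [my_first_case, my_second_case, {group, my_group}].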
The callback functions to be implemented by the test suite are all listed in module ct_suite. They are also described in more detail later in this User's Guide.
Init and End per Suite
Each test suite module can contain the optional configuration functions init_per_suite/1 and end_per_suite/1. If the init function is defined, so must the end function be.
If init_per_suite exists, it is called initially, before the test cases are executed. It typically contains initializations common to all test cases in the suite, which are only to be performed once. init_per_suite is recommended for setting up and verifying state and environment on the System Under Test (SUT) or the Common Test host node, or both, so that the test cases in the suite execute correctly. The following are examples of initial configuration operations:
- Opening a connection to the SUT
- Initializing a database
- Running an installation script
end_per_suite is called as the final stage of the test suite execution (after the last test case has finished). The function is meant to be used for cleaning up after init_per_suite.
init_per_suite and end_per_suite execute on dedicated Erlang processes, just like the test cases do. The result of these functions is, however, not included in the test run statistics of successful, failed, and skipped cases.
The argument to init_per_suite is Config, that is, the same key-value list of runtime configuration data that each test case takes as input argument. init_per_suite can modify this parameter with information that the test cases need. The possibly modified Config list is the return value of the function.
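A sketch of such a pair, assuming a hypothetical my_sut module that manages the connection to the SUT:

init_per_suite(Config) ->
    %% One-time setup for the whole suite
    {ok, Conn} = my_sut:connect(),
    %% Make the connection available to the test cases through Config
    [{conn, Conn} | Config].

end_per_suite(Config) ->
    %% Clean up what init_per_suite set up
    ok = my_sut:disconnect(proplists:get_value(conn, Config)).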
If init_per_suite fails, all test cases in the test suite are skipped automatically (so called auto skipped), including end_per_suite.
Notice that if init_per_suite and end_per_suite do not exist in the suite, Common Test calls dummy functions (with the same names) instead, so that output generated by hook functions can be saved to the log files for these dummies. For details, see Common Test Hooks.
Init and End per Test Case
Each test suite module can contain the optional configuration functions init_per_testcase/2 and end_per_testcase/2. If the init function is defined, so must the end function be.
If init_per_testcase exists, it is called before each test case in the suite. It typically contains initialization that must be done for each test case (analogous to init_per_suite for the suite).
end_per_testcase/2 is called after each test case has finished, enabling cleanup after init_per_testcase.
Note
If end_per_testcase crashes, test results are unaffected, but the occurrence is reported in the test execution logs.
The first argument to these functions is the name of the test case. This value can be used with pattern matching in function clauses or conditional expressions to choose different initialization and cleanup routines for different test cases, or perform the same routine for many, or all, test cases.
The second argument is the Config key-value list of runtime configuration data, which has the same value as the list returned by init_per_suite.
init_per_testcase/2 can modify this parameter or return it "as is". The return value of init_per_testcase/2 is passed as parameter Config to the test case itself.
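A sketch, assuming a hypothetical test case db_case that needs a database started before it runs (my_db is a made-up helper module):

init_per_testcase(db_case, Config) ->
    %% Per-case setup, selected by matching on the test case name
    {ok, Ref} = my_db:start(),
    [{db_ref, Ref} | Config];
init_per_testcase(_TestCase, Config) ->
    Config.

end_per_testcase(db_case, Config) ->
    ok = my_db:stop(proplists:get_value(db_ref, Config));
end_per_testcase(_TestCase, _Config) ->
    ok.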
The return value of end_per_testcase/2 is ignored by the test server, with the exception of the save_config and fail tuples.
end_per_testcase can check if the test case was successful (which in turn can determine how cleanup is to be performed). This is done by reading the value tagged with tc_status from Config. The value is one of the following:
- ok
- {failed,Reason}, where Reason is timetrap_timeout, information from exit/1, or details of a runtime error
- {skipped,Reason}, where Reason is a user-specific term
Function end_per_testcase/2 is called even if a test case terminates because of a call to ct:abort_current_testcase/1, or after a timetrap time-out. However, end_per_testcase then executes on a different process than the test case function. In this situation, end_per_testcase cannot change the reason for test case termination by returning {fail,Reason} or save data with {save_config,Data}.
The test case is skipped in the following two cases:
- If init_per_testcase crashes (called auto skipped).
- If init_per_testcase returns a tuple {skip,Reason} (called user skipped).
The test case can also be marked as failed without executing it by returning a tuple {fail,Reason} from init_per_testcase.
Note
If init_per_testcase crashes, or returns {skip,Reason} or {fail,Reason}, function end_per_testcase is not called.
If it is determined during execution of end_per_testcase that the status of a successful test case is to be changed to failed, end_per_testcase can return the tuple {fail,Reason} (where Reason describes why the test case fails).
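A sketch combining the tc_status lookup with the {fail,Reason} return, assuming a hypothetical helper verify_cleanup/0 that inspects the SUT after the case has run:

end_per_testcase(_TestCase, Config) ->
    case proplists:get_value(tc_status, Config) of
        ok ->
            %% The case succeeded; fail it anyway if cleanup detects a problem
            case verify_cleanup() of
                ok -> ok;
                {error, Reason} -> {fail, Reason}
            end;
        {failed, _Reason} ->
            ok;  %% already failed, nothing to change
        {skipped, _Reason} ->
            ok
    end.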
As init_per_testcase and end_per_testcase execute on the same Erlang process as the test case, printouts from these configuration functions are included in the test case log file.
Test Cases
The smallest unit that the test server is concerned with is a test case. Each test case can test many things, for example, make several calls to the same interface function with different parameters.
The author can choose to put many or few tests into each test case. Some things to keep in mind follow:
- Many small test cases tend to result in extra, and possibly duplicated, code as well as slow test execution because of the large overhead for initializations and cleanups. Avoid duplicated code, for example, by using common help functions; otherwise, the resulting suite becomes difficult to read and understand, and expensive to maintain.
- Larger test cases make it harder to tell what went wrong if one fails. Also, large portions of test code risk being skipped when errors occur.
- Readability and maintainability suffer when test cases become too large and extensive, and it is not certain that the resulting log files reflect the number of tests performed very well.
The test case function takes one argument, Config, which contains configuration information such as data_dir and priv_dir. (For details about these, see section Data and Private Directories.) The value of Config at the time of the call is the same as the return value from init_per_testcase, mentioned earlier.
Note
The test case function argument Config is not to be confused with the information that can be retrieved from the configuration files (using ct:get_config/1/2). The test case argument Config is to be used for runtime configuration of the test suite and the test cases, while configuration files are to contain data related to the SUT. These two types of configuration data are handled differently.
As parameter Config is a list of key-value tuples, that is, a data type called a property list, it can be handled by the proplists module. A value can, for example, be searched for and returned with function proplists:get_value/2. Also, or alternatively, the general lists module contains useful functions.
Normally, the only operations performed on Config are insertion (adding a tuple to the head of the list) and lookup. To look up a value in the config, proplists:get_value can be used. For example:

PrivDir = proplists:get_value(priv_dir, Config)
The test case result can be customized in several ways. For details, see the manual for Module:Testcase/1 in the ct_suite module.
Test Case Information Function
For each test case function there can be an extra function with the same name but without arguments. This is the test case information function. It is expected to return a list of tagged tuples that specifies various properties regarding the test case.
The following tags have special meaning:
timetrap - Sets the maximum time the test case is allowed to execute. If this time is exceeded, the test case fails with reason timetrap_timeout. Notice that init_per_testcase and end_per_testcase are included in the timetrap time. For details, see section Timetrap Time-Outs.

userdata - Specifies any data related to the test case. This data can be retrieved at any time using the ct:userdata/3 utility function.

silent_connections - For details, see section Silent Connections.

require - Specifies configuration variables required by the test case. If the required configuration variables are not found in any of the test system configuration files, the test case is skipped. A required variable can also be given a default value to be used if the variable is not found in any configuration file. To specify a default value, add a tuple on the form {default_config,ConfigVariableName,Value} to the test case information list (the position in the list is irrelevant).

Examples:
testcase1() ->
    [{require, ftp},
     {default_config, ftp, [{ftp, "my_ftp_host"},
                            {username, "aladdin"},
                            {password, "sesame"}]}].

testcase2() ->
    [{require, unix_telnet, unix},
     {require, {unix, [telnet, username, password]}},
     {default_config, unix, [{telnet, "my_telnet_host"},
                             {username, "aladdin"},
                             {password, "sesame"}]}].
For more information about require, see section Requiring and Reading Configuration Data in section External Configuration Data and function ct:require/1/2.
Note
Specifying a default value for a required variable can result in a test case always getting executed. This might not be a desired behavior.
If timetrap or require, or both, is not set specifically for a particular test case, default values specified by function suite/0 are used.
Tags other than those mentioned earlier are ignored by the test server.
An example of a test case information function follows:
reboot_node() ->
[
{timetrap,{seconds,60}},
{require,interfaces},
{userdata,
[{description,"System Upgrade: RpuAddition Normal RebootNode"},
{fts,"http://someserver.ericsson.se/test_doc4711.pdf"}]}
].
Test Suite Information Function
The function suite/0 can, for example, be used in a test suite module to set a default timetrap value and to require external configuration data. If a test case, or a group information function, also specifies any of the information tags, it overrides the default values set by suite/0. For details, see Test Case Information Function and Test Case Groups.
The following options can also be specified with the suite information list:
- stylesheet, see HTML Style Sheets
- userdata, see Test Case Information Function
- silent_connections, see Silent Connections
An example of the suite information function follows:
suite() ->
[
{timetrap,{minutes,10}},
{require,global_names},
{userdata,[{info,"This suite tests database transactions."}]},
{silent_connections,[telnet]},
{stylesheet,"db_testing.css"}
].
Test Case Groups
A test case group is a set of test cases sharing configuration functions and execution properties. Test case groups are defined by function groups/0, which should return a term having the following syntax:
groups() -> GroupDefs
Types:
GroupDefs = [GroupDef]
GroupDef = {GroupName,Properties,GroupsAndTestCases}
GroupName = atom()
GroupsAndTestCases = [GroupDef | {group,GroupName} | TestCase |
{testcase,TestCase,TCRepeatProps}]
TestCase = atom()
TCRepeatProps = [{repeat,N} | {repeat_until_ok,N} | {repeat_until_fail,N}]
GroupName is the name of the group and must be unique within the test suite module. Groups can be nested by including a group definition within the GroupsAndTestCases list of another group. Properties is the list of execution properties for the group. The possible values are as follows:
Properties = [parallel | sequence | Shuffle | {GroupRepeatType,N}]
Shuffle = shuffle | {shuffle,Seed}
Seed = {integer(),integer(),integer()}
GroupRepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail |
repeat_until_any_ok | repeat_until_any_fail
N = integer() | forever
Explanations:

parallel - Common Test executes all test cases in the group in parallel.

sequence - The cases are executed in a sequence as described in section Sequences in section Dependencies Between Test Cases and Suites.

shuffle - The cases in the group are executed in random order.

repeat, repeat_until_* - Orders Common Test to repeat execution of all the cases in the group a given number of times, or until any, or all, cases fail or succeed.
Example:
groups() -> [{group1, [parallel], [test1a,test1b]},
{group2, [shuffle,sequence], [test2a,test2b,test2c]}].
To specify in which order groups are to be executed (also with respect to test cases that are not part of any group), add tuples on the form {group,GroupName} to the all/0 list.
Example:
all() -> [testcase1, {group,group1}, {testcase,testcase2,[{repeat,10}]}, {group,group2}].
Execution properties can also be specified with a group tuple in all/0: {group,GroupName,Properties}. These properties override those specified in the group definition (see groups/0 earlier). This way, the same set of tests can be run, but with different properties, without having to make copies of the group definition in question.
If a group contains subgroups, the execution properties for these can also be specified in the group tuple: {group,GroupName,Properties,SubGroups}, where SubGroups is a list of tuples, {GroupName,Properties} or {GroupName,Properties,SubGroups}, representing the subgroups. Any subgroups defined in groups/0 for a group, that are not specified in the SubGroups list, execute with their predefined properties.
Example:
groups() -> [{tests1, [], [{tests2, [], [t2a,t2b]},
                           {tests3, [], [t3a,t3b]}]}].
To execute group tests1 twice with different properties for tests2 each time:
all() ->
[{group, tests1, default, [{tests2, [parallel]}]},
{group, tests1, default, [{tests2, [shuffle,{repeat,10}]}]}].
This is equivalent to the following specification:
all() ->
[{group, tests1, default, [{tests2, [parallel]},
{tests3, default}]},
{group, tests1, default, [{tests2, [shuffle,{repeat,10}]},
{tests3, default}]}].
Value default states that the predefined properties are to be used.
The following example shows how to override properties in a scenario with deeply nested groups:
groups() ->
[{tests1, [], [{group, tests2}]},
{tests2, [], [{group, tests3}]},
{tests3, [{repeat,2}], [t3a,t3b,t3c]}].
all() ->
[{group, tests1, default,
[{tests2, default,
[{tests3, [parallel,{repeat,100}]}]}]}].
For ease of readability, any of these syntax definitions can be replaced by a function call whose return value matches the expected syntax.
Example:
all() ->
[{group, tests1, default, test_cases()},
{group, tests1, default, [shuffle_test(),
{tests3, default}]}].
test_cases() ->
[{tests2, [parallel]}, {tests3, default}].
shuffle_test() ->
{tests2, [shuffle,{repeat,10}]}.
The described syntax can also be used in test specifications to change group properties at the time of execution, without having to edit the test suite. For more information, see section Test Specifications in section Running Tests and Analyzing Results.
As illustrated, properties can be combined. If, for example, shuffle, repeat_until_any_fail, and sequence are all specified, the test cases in the group are executed repeatedly, and in random order, until a test case fails. Then execution is immediately stopped and the remaining cases are skipped.
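For illustration, a group definition combining these properties could look as follows (the group and case names are made up):

groups() ->
    [{stress_tests, [shuffle, sequence, {repeat_until_any_fail, forever}],
      [tc1, tc2, tc3]}].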
Before execution of a group begins, the configuration function init_per_group(GroupName, Config) is called. The list of tuples returned from this function is passed to the test cases in the usual manner by argument Config. init_per_group/2 is meant to be used for initializations common for the test cases in the group. After execution of the group is finished, function end_per_group(GroupName, Config) is called. This function is meant to be used for cleaning up after init_per_group/2. If the init function is defined, so must the end function be.
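A sketch of such a pair, assuming a hypothetical server shared by the test cases in a group named server_tests:

init_per_group(server_tests, Config) ->
    %% Start a server that all cases in the group use
    {ok, Pid} = my_server:start(),
    [{server, Pid} | Config];
init_per_group(_Group, Config) ->
    Config.

end_per_group(server_tests, Config) ->
    ok = my_server:stop(proplists:get_value(server, Config));
end_per_group(_Group, _Config) ->
    ok.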
Whenever a group is executed, if init_per_group and end_per_group do not exist in the suite, Common Test calls dummy functions (with the same names) instead. Output generated by hook functions is saved to the log files for these dummies. For more information, see section Manipulating Tests in section Common Test Hooks.
Note
init_per_testcase/2 and end_per_testcase/2 are always called for each individual test case, no matter if the case belongs to a group or not.
The properties for a group are always printed at the top of the HTML log for init_per_group/2. The total execution time for a group is included at the bottom of the log for end_per_group/2.
Test case groups can be nested, so sets of groups can be configured with the same init_per_group/2 and end_per_group/2 functions. Nested groups can be defined by including a group definition, or a group name reference, in the test case list of another group.
Example:
groups() -> [{group1, [shuffle], [test1a,
{group2, [], [test2a,test2b]},
test1b]},
{group3, [], [{group,group4},
{group,group5}]},
{group4, [parallel], [test4a,test4b]},
{group5, [sequence], [test5a,test5b,test5c]}].
In the previous example, if all/0 returns group name references in the order [{group,group1},{group,group3}], the order of the configuration functions and test cases becomes the following (notice that init_per_testcase/2 and end_per_testcase/2 are also always called, but are not included in this example for simplification):
init_per_group(group1, Config) -> Config1 (*)
test1a(Config1)
init_per_group(group2, Config1) -> Config2
test2a(Config2), test2b(Config2)
end_per_group(group2, Config2)
test1b(Config1)
end_per_group(group1, Config1)
init_per_group(group3, Config) -> Config3
init_per_group(group4, Config3) -> Config4
test4a(Config4), test4b(Config4) (**)
end_per_group(group4, Config4)
init_per_group(group5, Config3) -> Config5
test5a(Config5), test5b(Config5), test5c(Config5)
end_per_group(group5, Config5)
end_per_group(group3, Config3)
(*) The order of test cases test1a and test1b, and of group2, is undefined, as group1 has a shuffle property.

(**) These cases are not executed in order, but in parallel.
Properties are not inherited from top-level groups to nested subgroups. For instance, in the previous example, the test cases in group2 are not executed in random order (which is the property of group1).
Parallel Property and Nested Groups
If a group has a parallel property, its test cases are spawned simultaneously and executed in parallel. However, a test case is not allowed to execute in parallel with end_per_group/2, which means that the time to execute a parallel group is equal to the execution time of the slowest test case in the group. A negative side effect of running test cases in parallel is that the HTML summary pages are not updated with links to the individual test case logs until function end_per_group/2 for the group has finished.
A group nested under a parallel group starts executing in parallel with previous (parallel) test cases (no matter what properties the nested group has). However, as test cases are never executed in parallel with init_per_group/2 or end_per_group/2 of the same group, it is only after a nested group has finished that the remaining parallel cases in the previous group are spawned.
Parallel Test Cases and I/O
A parallel test case has a private I/O server as its group leader. (For a description of the group leader concept, see ERTS). The central I/O server process, which handles the output from regular test cases and configuration functions, does not respond to I/O messages during execution of parallel groups. This is important to understand to avoid certain traps, like the following:
If a process, P, is spawned during execution of, for example, init_per_suite/1, it inherits the group leader of the init_per_suite process. This group leader is the central I/O server process mentioned earlier. If, at a later time, during parallel test case execution, some event triggers process P to call io:format/1/2, that call never returns (as the group leader is in a non-responsive state) and causes P to hang.
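The trap can be sketched as follows (a contrived example; the printer process and the go message are made up):

init_per_suite(Config) ->
    %% P inherits the central I/O server as its group leader
    P = spawn(fun() -> receive go -> io:format("hello~n") end end),
    [{printer, P} | Config].

%% In a test case executing in a parallel group:
my_parallel_case(Config) ->
    P = proplists:get_value(printer, Config),
    P ! go,  %% P now calls io:format/1, which never returns, so P hangs
    ok.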
Repeated Groups
A test case group can be repeated a certain number of times (specified by an integer) or indefinitely (specified by forever). The repetition can also be stopped prematurely if any or all cases fail or succeed, that is, if any of the properties repeat_until_any_fail, repeat_until_any_ok, repeat_until_all_fail, or repeat_until_all_ok is used. If the basic repeat property is used, the status of the test cases is irrelevant for the repeat operation.
The status of a subgroup can be returned (ok or failed), to affect the execution of the group on the level above. This is accomplished by, in end_per_group/2, looking up the value of tc_group_result in the Config list and checking the result of the test cases in the group. If status failed is to be returned from the group as a result, end_per_group/2 is to return the value {return_group_result,failed}. The status of a subgroup is taken into account by Common Test when evaluating if execution of a group is to be repeated or not (unless the basic repeat property is used).
The value of tc_group_result is a list of status tuples with the keys ok, skipped, and failed. The value of each status tuple is a list with the names of the test cases that have been executed with the corresponding status as result.
The following is an example of how to return the status from a group:
end_per_group(_Group, Config) ->
Status = proplists:get_value(tc_group_result, Config),
case proplists:get_value(failed, Status) of
[] -> % no failed cases
{return_group_result,ok};
_Failed -> % one or more failed
{return_group_result,failed}
end.
It is also possible, in end_per_group/2, to check the status of a subgroup (maybe to determine what status the current group is to return). This is as simple as illustrated in the previous example, only the group name is stored in a tuple {group_result,GroupName}, which can be searched for in the status lists.
Example:
end_per_group(group1, Config) ->
Status = proplists:get_value(tc_group_result, Config),
Failed = proplists:get_value(failed, Status),
case lists:member({group_result,group2}, Failed) of
true ->
{return_group_result,failed};
false ->
{return_group_result,ok}
end;
...
Note
When a test case group is repeated, the configuration functions init_per_group/2 and end_per_group/2 are also always called with each repetition.
Shuffled Test Case Order
The order in which test cases in a group are executed is under normal circumstances the same as the order specified in the test case list in the group definition. With property shuffle set, however, Common Test instead executes the test cases in random order.
You can provide a seed value (a tuple of three integers) with the shuffle property, {shuffle,Seed}. This way, the same shuffling order can be created every time the group is executed. If no seed value is specified, Common Test creates a "random" seed for the shuffling operation (using the return value of erlang:timestamp/0). The seed value is always printed to the init_per_group/2 log file so that it can be used to recreate the same execution order in a subsequent test run.
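For example, to make the shuffling order reproducible between runs, a fixed seed can be given in the group definition (the seed and names are arbitrary):

groups() ->
    [{my_group, [{shuffle, {1,2,3}}], [tc1, tc2, tc3]}].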
Note
If a shuffled test case group is repeated, the seed is not reset between turns.
If a subgroup is specified in a group with a shuffle property, the execution order of this subgroup in relation to the test cases (and other subgroups) in the group is random. The order of the test cases in the subgroup is however not random (unless the subgroup has a shuffle property).
Group Information Function
The test case group information function, group(GroupName), serves the same purpose as the suite and test case information functions previously described. However, the scope for the group information function is all test cases and subgroups in the group in question (GroupName).
Example:
group(connection_tests) ->
[{require,login_data},
{timetrap,1000}].
The group information properties override those set with the suite information function, and can in turn be overridden by test case information properties. For a list of valid information properties and more general information, see the Test Case Information Function.
Information Functions for Init- and End-Configuration
Information functions can also be used for functions init_per_suite, end_per_suite, init_per_group, and end_per_group, and they work the same way as with the Test Case Information Function. This is useful, for example, for setting timetraps and requiring external configuration data relevant only for the configuration function in question (without affecting properties set for groups and test cases in the suite).
The information function init/end_per_suite() is called for init/end_per_suite(Config), and information function init/end_per_group(GroupName) is called for init/end_per_group(GroupName,Config). However, information functions cannot be used with init/end_per_testcase(TestCase, Config), as these configuration functions execute on the test case process and use the same properties as the test case (that is, the properties set by the test case information function, TestCase()). For a list of valid information properties and more general information, see the Test Case Information Function.
Data and Private Directories
In the data directory, data_dir, the test module has its own files needed for the testing. The name of data_dir is the name of the test suite followed by "_data". For example, "some_path/foo_SUITE.beam" has the data directory "some_path/foo_SUITE_data/". Use this directory for portability, that is, to avoid hardcoding directory names in your suite. As the data directory is stored in the same directory as your test suite, you can rely on its existence at runtime, even if the path to your test suite directory has changed between test suite implementation and execution.
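A sketch of a test case reading test input from data_dir (the file name input.txt and the helper check_contents/1 are hypothetical):

my_case(Config) ->
    DataDir = proplists:get_value(data_dir, Config),
    %% Locate the input file relative to data_dir instead of hardcoding a path
    {ok, Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    check_contents(Bin).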
priv_dir is the private directory for the test cases. This directory can be used whenever a test case (or configuration function) needs to write something to file. The name of the private directory is generated by Common Test, which also creates the directory.
By default, Common Test creates one central private directory per test run, shared by all test cases. This is not always suitable, especially if the same test cases are executed multiple times during a test run (that is, if they belong to a test case group with property repeat) and there is a risk that files in the private directory get overwritten. Under these circumstances, Common Test can be configured to create one dedicated private directory per test case and execution instead. This is accomplished with the flag/option create_priv_dir (to be used with the ct_run program, the ct:run_test/1 function, or as a test specification term). There are three possible values for this option, as follows:
- auto_per_run
- auto_per_tc
- manual_per_tc
The first value indicates the default priv_dir behavior, that is, one private directory created per test run. The two latter values tell Common Test to generate a unique test directory name per test case and execution. If the auto version is used, all private directories are created automatically. This can become very inefficient for test runs with many test cases or repetitions, or both. Therefore, if the manual version is used instead, the test case must tell Common Test to create priv_dir when it needs it. It does this by calling the function ct:make_priv_dir/0.
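A sketch of a test case running with the manual_per_tc setting (the scratch file name is arbitrary):

my_case(Config) ->
    %% With manual_per_tc, the private directory must be created
    %% explicitly before it is used
    ok = ct:make_priv_dir(),
    PrivDir = proplists:get_value(priv_dir, Config),
    ok = file:write_file(filename:join(PrivDir, "scratch.txt"), <<"data">>).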
Note
Do not depend on the current working directory for reading and writing data files, as this is not portable. All scratch files are to be written in the priv_dir and all data files are to be located in data_dir. Also, the Common Test server sets the current working directory to the test case log directory at the start of every case.
Execution Environment
Each test case is executed by a dedicated Erlang process. The process is spawned when the test case starts, and terminated when the test case is finished. The configuration functions init_per_testcase and end_per_testcase execute on the same process as the test case.
The configuration functions init_per_suite and end_per_suite execute, like test cases, on dedicated Erlang processes.
Timetrap Time-Outs
The default time limit for a test case is 30 minutes, unless a timetrap is specified either by the suite, group, or test case information function. The timetrap time-out value defined by suite/0 is the value that is used for each test case in the suite (and for the configuration functions init_per_suite/1, end_per_suite/1, init_per_group/2, and end_per_group/2). A timetrap value defined by group(GroupName) overrides one defined by suite() and is used for each test case in group GroupName, and any of its subgroups. If a timetrap value is defined by group/1 for a subgroup, it overrides that of its higher-level groups. Timetrap values set by individual test cases (by the test case information function) override both group- and suite-level timetraps.
A timetrap can also be set or reset dynamically during the execution of a test case, or configuration function. This is done by calling ct:timetrap/1. This function cancels the current timetrap and starts a new one (that stays active until time-out, or end of the current function).
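For example, a test case that knows a particular operation is slow can set a longer timetrap before starting it (the time value and helper function are made up):

my_slow_case(_Config) ->
    %% Cancel the current timetrap and start a new one-hour timetrap
    ct:timetrap({hours, 1}),
    ok = run_lengthy_operation().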
Timetrap values can be extended with a multiplier value specified at startup with option multiply_timetraps. It is also possible to let the test server decide to scale up timetrap time-out values automatically, that is, if tools such as cover or trace are running during the test. This feature is disabled by default and can be enabled with start option scale_timetraps.
If a test case needs to suspend itself for a time that also gets multiplied by multiply_timetraps (and possibly also scaled up if scale_timetraps is enabled), the function ct:sleep/1 can be used (instead of, for example, timer:sleep/1).
A function (fun/0 or a {Mod,Func,Args} (MFA) tuple) can be specified as timetrap value in the suite, group, and test case information functions, and as argument to function ct:timetrap/1.
Examples:
{timetrap,{my_test_utils,timetrap,[?MODULE,system_start]}}
ct:timetrap(fun() -> my_timetrap(TestCaseName, Config) end)
The user timetrap function can be used for two things as follows:
- To act as a timetrap. The time-out is triggered when the function returns.
- To return a timetrap time value (other than a function).
Before execution of the timetrap function (which is performed on a parallel, dedicated timetrap process), Common Test cancels any previously set timer for the test case or configuration function. When the timetrap function returns, the time-out is triggered, unless the return value is a valid timetrap time, such as an integer, or a {SecMinOrHourTag,Time} tuple (for details, see module ct_suite). If a time value is returned, a new timetrap is started to generate a time-out after the specified time.
The user timetrap function can return a time value after a delay. The effective timetrap time is then the delay time plus the returned time.
Logging - Categories and Verbosity Levels
Common Test provides the following three main functions for printing strings:
ct:log(Category, Importance, Format, FormatArgs, Opts)
ct:print(Category, Importance, Format, FormatArgs)
ct:pal(Category, Importance, Format, FormatArgs)
The log/1,2,3,4,5 function prints a string to the test case log file. The print/1,2,3,4 function prints the string to screen. The pal/1,2,3,4 function prints the same string both to file and screen. The functions are described in module ct.
The optional Category argument can be used to categorize the log printout. Categories can be used for two things as follows:
- To compare the importance of the printout to a specific verbosity level.
- To format the printout according to a user-specific HTML Style Sheet (CSS).
Argument Importance specifies a level of importance that, compared to a verbosity level (general and/or set per category), determines if the printout is to be visible. Importance is any integer in the range 0..99. Predefined constants exist in the ct.hrl header file. The default importance level, ?STD_IMPORTANCE (used if argument Importance is not provided), is 50. This is also the importance used for standard I/O, for example, from printouts made with io:format/2, io:put_chars/1, and so on.
Importance is compared to a verbosity level set by the verbosity start flag/option. The level can be set per category or generally, or both. If verbosity is not set by the user, a level of 100 (?MAX_VERBOSITY = all printouts visible) is used as default value. Common Test performs the following test:

Importance >= (100-VerbosityLevel)
The constant ?STD_VERBOSITY has value 50 (see ct.hrl). At this level, all standard I/O gets printed. If a lower verbosity level is set, standard I/O printouts are ignored. Verbosity level 0 effectively turns all logging off (except printouts made by Common Test itself).
The general verbosity level is not associated with any particular category. This level sets the threshold for the standard I/O printouts, uncategorized ct:log/print/pal printouts, and printouts for categories with undefined verbosity level.
Examples:
Some printouts during test case execution:
io:format("1. Standard IO, importance = ~w~n", [?STD_IMPORTANCE]),
ct:log("2. Uncategorized, importance = ~w", [?STD_IMPORTANCE]),
ct:log(info, "3. Categorized info, importance = ~w", [?STD_IMPORTANCE]),
ct:log(info, ?LOW_IMPORTANCE, "4. Categorized info, importance = ~w", [?LOW_IMPORTANCE]),
ct:log(error, ?HI_IMPORTANCE, "5. Categorized error, importance = ~w", [?HI_IMPORTANCE]),
ct:log(error, ?MAX_IMPORTANCE, "6. Categorized error, importance = ~w", [?MAX_IMPORTANCE]),
If starting the test with a general verbosity level of 50 (?STD_VERBOSITY
):
$ ct_run -verbosity 50
the following is printed:
1. Standard IO, importance = 50
2. Uncategorized, importance = 50
3. Categorized info, importance = 50
5. Categorized error, importance = 75
6. Categorized error, importance = 99
If starting the test with:
$ ct_run -verbosity 1 and info 75
the following is printed:
3. Categorized info, importance = 50
4. Categorized info, importance = 25
6. Categorized error, importance = 99
Note that the category argument is not required in order to specify only the importance of a printout. Example:
ct:pal(?LOW_IMPORTANCE, "Info report: ~p", [Info])
Or perhaps in combination with constants:
-define(INFO, ?LOW_IMPORTANCE).
-define(ERROR, ?HI_IMPORTANCE).
ct:log(?INFO, "Info report: ~p", [Info])
ct:pal(?ERROR, "Error report: ~p", [Error])
The functions ct:set_verbosity/2 and ct:get_verbosity/1 may be used to modify and read verbosity levels during test execution.
The arguments Format and FormatArgs in ct:log/print/pal are always passed on to the STDLIB function io:format/3 (for details, see the io manual page).
ct:pal/4 and ct:log/5 add headers to strings being printed to the log file. The strings are also wrapped in div tags with a CSS class attribute, so that stylesheet formatting can be applied. To disable this feature for a printout (that is, to get a result similar to using io:format/2), call ct:log/5 with the no_css option.
How categories can be mapped to CSS tags is documented in section HTML Style Sheets in section Running Tests and Analyzing Results.
Common Test escapes special HTML characters (<, > and &) in printouts made to the log file with ct:pal/4 and io:format/2. To print strings with HTML tags to the log, use the ct:log/3,4,5 function. The character escaping feature is disabled by default for ct:log/3,4,5 but can be enabled with the esc_chars option in the Opts list, see ct:log/3,4,5.
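A sketch of how these options combine (the strings are arbitrary):

%% Wrapped in div tags; HTML markup is interpreted (default for ct:log)
ct:log(info, ?STD_IMPORTANCE, "<b>bold in the log</b>", [], []),
%% With esc_chars, the tags are escaped and shown verbatim
ct:log(info, ?STD_IMPORTANCE, "<b>shown as text</b>", [], [esc_chars]),
%% With no_css, no header or div wrapping is added, similar to io:format/2
ct:log(info, ?STD_IMPORTANCE, "raw line in the log", [], [no_css]),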
If the character escaping feature needs to be disabled (typically for backwards compatibility reasons), use the ct_run start flag -no_esc_chars, or the ct:run_test/1 start option {esc_chars,Bool} (this start option is also supported in test specifications).
For more information about log files, see section Log Files in section Running Tests and Analyzing Results.
Illegal Dependencies
Even though it is highly efficient to write test suites with the Common Test framework, mistakes can be made, mainly because of illegal dependencies. Some of the more frequent mistakes from our own experience with running the Erlang/OTP test suites follow:
Depending on current directory, and writing there:
This is a common error in test suites. It is assumed that the current directory is the same as the author used as current directory when the test case was developed. Many test cases even try to write scratch files to this directory. Instead, data_dir and priv_dir are to be used to locate data and for writing scratch files.

Depending on execution order:
During development of test suites, make no assumptions on the execution order of the test cases or suites. For example, a test case must not assume that a server it depends on is already started by a previous test case. Reasons for this follow:
- The user/operator can specify the order at will, and maybe a different execution order is sometimes more relevant or efficient.
- If the user specifies a whole directory of test suites for the test, the execution order of the suites depends on how the files are listed by the operating system, which varies between systems.
- If a user wants to run only a subset of a test suite, there is no way one test case could successfully depend on another.
Depending on Unix:
Running Unix commands through os:cmd is likely not to work on non-Unix platforms.

Nested test cases:
Starting a test case from another not only tests the same thing twice, but also makes it harder to follow what is being tested. Also, if the called test case fails for some reason, so does the caller. This way, one error gives rise to several error reports, which is to be avoided.
Functionality common for many test case functions can be implemented in common help functions. If these functions are useful for test cases across suites, put the help functions into common help modules.
Failure to crash or exit when things go wrong:
Making requests without checking that the return value indicates success can be OK if the test case fails later, but it is never acceptable just to print an error message (into the log file) and return successfully. Such test cases do harm, as they create a false sense of security when overviewing the test results.
Messing up for subsequent test cases:
Test cases are to restore as much of the execution environment as possible, so that subsequent test cases do not crash because of their execution order. The function end_per_testcase is suitable for this.