Test Cases Tutorial
Unit testing for TWiki development. Follow-up article on
SoYouWantToBeATWikiDeveloper.
- Introduction
- What's available in TWiki
- Asserts
- Unit Tests
- Running unit tests
- Writing unit tests
- Structure of a test case
- Building test fixtures
- Writing test functions
- Repeating the same test for different environments ( verify_ functions)
- Unit::TestCase
- assert($condition, $message)
- assert_equals($expected, $actual, $message)
- assert_not_null($ref, $message)
- assert_null($ref, $message)
- assert_str_equals($expected, $actual, $message)
- assert_str_not_equals($expected, $actual, $message)
- assert_num_equals($expected, $actual, $message)
- assert_matches($expected, $actual, $message)
- assert_does_not_match($expected, $actual, $message)
- assert_deep_equals($expected, $actual, $message)
- annotate($mess)
- assert_html_equals($expected, $actual, $message)
- assert_html_matches($expected, $actual, $message)
- capture(\&function, ...) -> ($text, $result)
- TWikiTestCase
- TWikiFnTestCase
- Logging from unit tests
- General advice for new unit testers
- Automated Test Cases
- Manual Test Cases
- Setting up a Test Environment
- Testing Javascript
- Testing extensions
- Discussion
Introduction
TWiki is written in perl. In the opinion of many professional software developers, Perl is a horrendous language. It is often described as a WORN language - "Write Once Read Never" - because once it is written, it can be incredibly difficult for someone else to read and understand the code. Sometimes even the original coder is foxed by their own code from the day before. That's OK for what Perl was designed for - quick hack scripts - but when it is used to build a large and complex system like TWiki, there needs to be a lot of discipline and support to make the code maintainable. Test cases are one discipline we use to make it easier for other people to maintain our code. As well as helping prevent bugs in the code, well-written tests can help reveal the intention behind the code. As long as the tests pass, they represent an inarguable specification of what the code is supposed to do, much more reliable than any documentation can ever be.
What's that you said? You were out in the wilderness on a camping trip, and a voice spoke to you out of a burning bush? Now you have seen the light, and you want to write some tests for TWiki? Good for you! But where do you start? This article is intended to give you an introduction to TWiki testing methods, and discuss some of the finer points of writing unit tests and topic test-cases.
What's available in TWiki
The TWiki codebase has support for internal integrity checking (asserts), two types of automated test, and a methodology for manual tests.
An assert is a section of code that is disabled in production releases, but which developers can enable to check for error conditions in the code.
A unit test is a perl program that usually focuses on testing a single API or object in the core.
- Unit tests may test individual modules or APIs in the code, or may spoof an entire user transaction, checking the response at each step.
- Unit tests run without a browser.
An automated test case is a topic in the TestCases web. The TestCases web is available from a subversion checkout area that is configured as a running TWiki. These topics are simple "stimulus-response" tests, designed for testing TML and other more browser-oriented features of TWiki.
- A testcase is a set of "actual" blocks which contain source TML, each of which has a corresponding "expected" block that contains the HTML expected after post-processing,
- The TestFixturePlugin (in subversion only) compares the actual result of formatting against the expected result to give a pass/fail,
- Run from a browser,
- You will usually switch between manual inspection and automatic runs of the testcase, so the TestCases web has a bunch of infrastructure to support this.
A manual test case is a topic in the TestCases web that documents a series of steps to be followed by a tester, and describes the expected outcome.
In general, automated test cases are preferred to manual test cases, and unit tests are preferred to both. Asserts are used as part of normal programming practice, irrespective of the other test methods being used.
You really ought to read this whole topic; but if you are in a hurry and just want to get a test environment set up (e.g. you are extending a plugin and want to run the unit tests), you can jump straight to Setting up a Test Environment.
Note: the examples and setup descriptions below are written for Linux, but the test environment also runs under other shells and perl versions, such as Active State perl on Windows.
Asserts
Asserts are enabled by setting the environment variable TWIKI_ASSERTS to a non-zero value. This is done automatically during unit tests, but for all other types of test you should edit LocalLib.cfg and add the line
$ENV{TWIKI_ASSERTS}=1;
to the top of the file. This will slow TWiki down very slightly, as it has to execute the assert checks, so don't benchmark with asserts enabled.
Asserts are implemented using the Assert module. This module defines the function ASSERT, which is used thus:
use Assert;
sub do_something {
ASSERT($i>0) if DEBUG;
...
This will cause an exception to be thrown if $i <= 0 when do_something is called. The if DEBUG is required (it is used for conditional compilation).
Asserts should be used whenever a boundary condition needs to be verified before allowing TWiki to continue. For example, they can be used to check the parameters to functions, or check that the results of a computation are in-range.
Asserts are not a substitute for good testcase practice - they are merely a handy sanity check for programmers.
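To make the mechanism concrete, here is a minimal sketch of a DEBUG-gated assert in plain Perl. This is illustrative only - the real Assert module's internals differ - but it shows why the if DEBUG idiom is free in production: when DEBUG is a constant 0, perl removes the whole statement at compile time.

```perl
# Illustrative sketch of a DEBUG-gated ASSERT (not the real Assert module).
use strict;
use warnings;

# Enabled when TWIKI_ASSERTS is set, mirroring the behaviour described above.
use constant DEBUG => $ENV{TWIKI_ASSERTS} ? 1 : 0;

sub ASSERT {
    my ( $condition, $message ) = @_;
    die 'Assertion failed: ' . ( $message || '' ) . "\n" unless $condition;
    return 1;
}

sub do_something {
    my $i = shift;
    # Compiled away entirely when DEBUG is 0, so production pays no cost.
    ASSERT( $i > 0, '$i must be positive' ) if DEBUG;
    return $i * 2;
}
```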
Unit Tests
Unit tests for the TWiki core are kept in Subversion in the UnitTestContrib extension, in the test/unit directory.
Extensions such as plugins keep their unit tests in subdirectories below test/unit, e.g. test/unit/FuncUsersContrib (this is to avoid accidentally overwriting, or otherwise confusing, the system tests).
Unit tests use a simple custom test framework, in core/lib/Unit. This framework is inspired by JUnit, the famous Java unit testing framework, and provides extensive support for writing object-oriented test programs.
Test packages are divided into Test Suites and Test Cases. Test Suites are just collections of Test Cases and other Test Suites. Test Cases are collections of Test Functions that usually share some common Test Fixtures. A test fixture is the generic term for the environment required to run a test - for example, a test fixture may comprise a set of configuration settings, a TWiki instance, and a set of webs and topics.
Most core test cases test a single class, or a group of classes that are closely inter-related (a subsystem). Others may test externally visible components, such as the CGI scripts. So, for example, there is a PrefsTest for testing everything to do with preferences, StoreTests for testing the store, and SaveScriptTests for testing the save script. The unit tests are collected into a single Test Suite, called TWikiSuite.
Extensions, such as Plugins, typically have a single test case that performs all the tests for that extension. Some larger extensions have suites of test cases, organised similarly to the core tests (e.g. ActionTrackerPlugin).
Running unit tests
First set up a test environment.
There are two ways to run tests: you can either run individual tests / test suites (recommended for the core), or you can use the test target in BuildContrib (recommended for extensions). For example, to run all the core unit tests for TWiki:
$ cd test/unit
$ perl ../bin/TestRunner.pl TWikiSuite.pm
You can also run individual test cases to focus in on a failure.
$ cd test/unit
$ perl ../bin/TestRunner.pl RegisterTests.pm
To run the unit tests for an extension built using BuildContrib - in this case the CommentPlugin:
$ cd CommentPlugin/lib/TWiki/Plugins/CommentPlugin
$ perl build.pl test
If the test fails, a useful tip is to copy-paste the command line for the test run and run it outside the build script.
Sometimes when a test fails (or is interrupted) it may leave parts of the test fixture lying around. It does this so you can debug what went wrong. However, the next test run may refuse to run if it detects parts of the fixture still in place. In this case you can pass the -clean option to TestRunner.pl to force it to clean up the previous test run before it starts.
To run tests for an extension, it is best to use the BuildContrib (which must be installed in the TWiki installation pointed at by the environment variable TWIKI_HOME). For example:
$ cd FuncUsersContrib/lib/TWiki/Contrib/FuncUsersContrib
$ perl build.pl test
If you want to run the tests without using the build script, you need to set up a rather convoluted @INC path. The easiest thing to do is to run perl build.pl once, and then copy-paste the command line it prints out. This lets you focus on different test files, and can help home in on a failing test.
See the BuildContrib and BuildContribCookbook pages for more information.
Writing unit tests
In brief, tests are organised into test cases, which are collections of test functions that share common setup requirements, or test similar things. Test cases are in turn collected into test suites - for example, TWikiSuite.pm.
Structure of a test case
A test case is a collection of test functions that usually share some common test fixtures. A test fixture is a bunch of code that creates a private, controlled environment that a test can run within.
As a simple example we will use VariableTests, the test case that checks the function of many common TWiki Variables. The following code comes from an old version of UnitTestContrib/test/unit/VariableTests.pm:
A test case is a Perl class, and for TWiki it is a subclass of TWikiTestCase, so a testcase always starts with:
package VariableTests;
use base qw( TWikiTestCase ); # This base class sets up the basic test fixture
use strict;
sub new {
my $self = shift()->SUPER::new(@_);
return $self;
}
This is followed by the two functions set_up and tear_down. To ensure that the test fixture is "clean" before each individual test function runs, the entire fixture is built up before each test function, and torn down afterwards. Even though test functions are run in alphabetical order, one test function must not depend on the results of another.
Building test fixtures
set_up is used to build the fixtures, and tear_down is used to remove them again once the test is complete. TWiki unit tests are usually based on one of two standard base classes: the basic TWikiTestCase, or (most commonly) TWikiFnTestCase, which is derived from it. These are described in more detail below, but first, let's continue to set up a test fixture based on TWikiTestCase so you can see how it's done:
# Constants used in this test case
my $testWeb = 'TemporaryTestWeb'; # name of the test web
my $testTopic = 'TestTopic'; # name of a topic
my $testUsersWeb = 'TemporaryTestUsersUsersWeb'; # Name of a %USERSWEB% for our test users
my $twiki; # TWiki instance
sub set_up {
my $this = shift; # the Test::Unit::TestCase object
Now invoke the superclass setup. It is very important that this is called first, as it saves the $TWiki::cfg configuration (which comes from LocalSite.cfg) before we start tailoring it for this test case. It also redirects the log files to files in the test directory.
$this->SUPER::set_up();
Configure $TWiki::cfg with an appropriate setup for this test. Do not use local paths, and make sure you configure everything that might affect the test results. In this case, some of our test functions are going to use user data, so we will have to create some known fake users. That means we have to configure the password manager and protect all the places that password manager uses to store user info.
$TWiki::cfg{UsersWebName} = $testUsersWeb;
$TWiki::cfg{MapUserToWikiName} = 1;
$TWiki::cfg{Htpasswd}{FileName} = '/tmp/junkpasswd';
$TWiki::cfg{PasswordManager} = 'TWiki::Users::HtPasswdUser';
Now fake a simple query and create a TWiki instance and some test webs.
# Make up a simple query
my $query = new Unit::Request("");
$query->path_info("/$testWeb/$testTopic");
my $response = new Unit::Response();
$response->charset("utf8");
# Create a TWiki instance
$twiki = new TWiki(undef, $query);
# and use it to create some test webs
$twiki->{store}->createWeb( $twiki->{user}, $testWeb );
$twiki->{store}->createWeb( $twiki->{user}, $testUsersWeb );
}
Now, tear_down is responsible for cleaning up after the test has run, so it has to restore the state to how it was before set_up was first called:
sub tear_down {
my $this = shift; # the Test::Unit::TestCase object
# This will erase the test webs
$this->removeWebFixture( $twiki, $testWeb );
$this->removeWebFixture( $twiki, $testUsersWeb );
# This will destroy the TWiki instance. We use eval to suppress errors
eval { $twiki->finish() };
# This will automatically restore the state of $TWiki::cfg
$this->SUPER::tear_down();
}
Writing test functions
Now we are ready to write some test functions. Test functions are simply functions in the package whose names start with 'test'. As described above, each test function is run after set_up and before tear_down, so we know that the individual functions can change anything in the TWiki environment, as long as it was protected by set_up and will be restored by tear_down. So we can do what we like in the $testUsersWeb web, because we created our own version of it. But we must not, under any circumstances, write to the TWiki web, or to any other web that we didn't create in set_up.
The test case we are writing tests some TWiki variables, one of which is %SCRIPTURL%. We want to test this variable with a range of parameters, so let's write a test function for it.
sub test_SCRIPTURL {
my $this = shift; # this is an instance of Test::Unit::TestCase; see the online docs for more help
# We can munge $TWiki::cfg safely, because it will be restored in tear_down
$TWiki::cfg{ScriptUrlPaths}{snarf} = "sausages";
undef $TWiki::cfg{ScriptUrlPaths}{view};
$TWiki::cfg{ScriptSuffix} = ".dot";
my $result = $twiki->handleCommonTags("%SCRIPTURL%", $testWeb, $testTopic);
$this->assert_str_equals(
"$TWiki::cfg{DefaultUrlHost}$TWiki::cfg{ScriptUrlPath}", $result);
$result = $twiki->handleCommonTags(
"%SCRIPTURLPATH{view}%", $testWeb, $testTopic);
$this->assert_str_equals("$TWiki::cfg{ScriptUrlPath}/view.dot", $result);
$result = $twiki->handleCommonTags(
"%SCRIPTURLPATH{snarf}%", $testWeb, $testTopic);
$this->assert_str_equals("sausages", $result);
}
Rinse, and repeat for everything else you want to test!
There are a wide variety of test cases, both system tests and extension tests. Some are cleaner than others. The quickest way to get a new testcase up and running is usually to cut-and-paste an existing testcase that does something similar.
Some simple rules for writing test functions:
- never produce output on the terminal (except when debugging)
- never make a test function wait on user input
- always build fixtures / set up the configuration for every aspect of the thing you are testing. If your code only works because you happened to have the right setting in LocalSite.cfg, you will regret it later.
- never rely on the person running a test to "check by eye". They won't.
- avoid using TWiki APIs to build test fixtures that are "higher level" than the ones you are testing. There are no hard and fast rules for what "higher level" means, but in general, avoid using an API if there is a chance that it in turn relies on the functionality you are trying to test.
- watch out for caches - many TWiki classes cache data, and this can banjax your tests if you are not careful.
- Test functions are executed in alphabetical order, but since each is executed independently, you should not rely on the order.
Repeating the same test for different environments (verify_ functions)
A very common requirement is a test that has to be repeated for a number of different environments. For example, say we want to test TWiki::Func::getViewUrl for a number of combinations of different values of $TWiki::cfg{ScriptUrlPath} and $TWiki::cfg{ScriptSuffix}. We could manually write tests as follows:
sub run_test {
my $this = shift;
# Do some tests
}
sub test_1 {
my $this = shift;
$TWiki::cfg{ScriptUrlPath} = 'http://test.one';
$TWiki::cfg{ScriptSuffix} = '';
$this->run_test();
}
sub test_2 {
my $this = shift;
$TWiki::cfg{ScriptUrlPath} = 'http://test.one';
$TWiki::cfg{ScriptSuffix} = '.pl';
$this->run_test();
}
sub test_3 {
my $this = shift;
$TWiki::cfg{ScriptUrlPath} = 'http://test.two';
$TWiki::cfg{ScriptSuffix} = '';
$this->run_test();
}
sub test_4 {
my $this = shift;
$TWiki::cfg{ScriptUrlPath} = 'http://test.two';
$TWiki::cfg{ScriptSuffix} = '.pl';
$this->run_test();
}
This works fine, but rapidly becomes tiresome when you have a large number of different combinations. So Unit::TestCase provides support for fixture groups. This feature lets you define combinations of environment setup functions that are then used by the framework to generate the actual test functions from template test functions.
This is most easily seen by recoding the above example to use fixture groups:
sub sup_1 { $TWiki::cfg{ScriptUrlPath} = 'http://test.one'; }
sub sup_2 { $TWiki::cfg{ScriptUrlPath} = 'http://test.two'; }
sub ss_1 { $TWiki::cfg{ScriptSuffix} = ''; }
sub ss_2 { $TWiki::cfg{ScriptSuffix} = '.pl'; }
# Implement this to return an array of arrays, each of which is a list
# of the names of fixture setup functions.
sub fixture_groups {
return ( [ 'sup_1', 'sup_2' ], [ 'ss_1', 'ss_2' ] );
}
sub verify_it_works {
my $this = shift; # This is the testcase object, just like in a test_ function
# This is the analog of run_tests in the previous example
# Do some tests
}
This will result in the following test functions being generated automatically:
verify_it_works_sup_1_ss_1
verify_it_works_sup_1_ss_2
verify_it_works_sup_2_ss_1
verify_it_works_sup_2_ss_2
So verify_it_works is called for all combinations of settings of ScriptUrlPath and ScriptSuffix. You can have as many verify_ functions as you want in a single testcase module, but you can only have one fixture_groups function.
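The way the generated names combine can be sketched with a few lines of plain Perl (an illustrative reimplementation, not the framework's own code):

```perl
# Sketch: cross one template test name with every combination of
# fixture group members to produce the generated test names.
use strict;
use warnings;

sub generated_test_names {
    my ( $verify, @groups ) = @_;
    my @names = ($verify);
    for my $group (@groups) {
        # Append each member of this group to every name built so far.
        @names = map {
            my $base = $_;
            map {"${base}_$_"} @$group;
        } @names;
    }
    return @names;
}

my @tests = generated_test_names(
    'verify_it_works',
    [ 'sup_1', 'sup_2' ],
    [ 'ss_1',  'ss_2' ],
);
```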
Unit::TestCase
This is the base class of all unit test classes. It is abstract, almost entirely TWiki-independent, and provides some basic functionality used in testing.
assert($condition, $message)
Tests if $condition is boolean true (using perl's definition of 'true').
assert_equals($expected, $actual, $message)
Tests equivalence using eq. Use assert_str_equals and assert_num_equals in preference to this method (it is provided only for compatibility with CPAN:Test::Unit).
assert_not_null($ref, $message)
Tests that $ref is defined (i.e. not undef).
assert_null($ref, $message)
Tests that $ref is undefined (i.e. undef).
assert_str_equals($expected, $actual, $message)
Tests two strings for equality using eq.
assert_str_not_equals($expected, $actual, $message)
Tests two strings for inequality using ne.
assert_num_equals($expected, $actual, $message)
Tests two numbers for equality using ==.
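The reason for separate string and numeric asserts is that Perl's eq and == can disagree, for example:

```perl
# '1.0' and '1' are numerically equal but not string-equal.
use strict;
use warnings;

my ( $x, $y ) = ( '1.0', '1' );
my $num_equal = ( $x == $y ) ? 1 : 0;    # == compares numeric values
my $str_equal = ( $x eq $y ) ? 1 : 0;    # eq compares the characters
```

So assert_num_equals('1.0', '1') would pass while assert_str_equals('1.0', '1') would fail; pick the one that matches your intent.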
assert_matches($expected, $actual, $message)
Tests if $actual =~ /$expected/.
assert_does_not_match($expected, $actual, $message)
Tests if $actual !~ /$expected/.
assert_deep_equals($expected, $actual, $message)
Tests equality of two hierarchical data structures.
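To illustrate what "hierarchical" means here, this is a simplified sketch of deep comparison, handling only scalars, arrays and hashes (the real implementation is more thorough):

```perl
# Simplified recursive deep-equality check, for illustration only.
use strict;
use warnings;

sub deep_equals {
    my ( $p, $q ) = @_;
    return 0 if ref($p) ne ref($q);
    if ( ref $p eq 'ARRAY' ) {
        return 0 unless @$p == @$q;
        for my $i ( 0 .. $#$p ) {
            return 0 unless deep_equals( $p->[$i], $q->[$i] );
        }
        return 1;
    }
    if ( ref $p eq 'HASH' ) {
        return 0 unless keys %$p == keys %$q;
        for my $k ( keys %$p ) {
            return 0 unless exists $q->{$k} && deep_equals( $p->{$k}, $q->{$k} );
        }
        return 1;
    }
    # Plain scalars: both undef, or both defined and string-equal.
    return 1 if !defined $p && !defined $q;
    return 0 unless defined $p && defined $q;
    return ( $p eq $q ) ? 1 : 0;
}
```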
annotate($mess)
Outputs $mess in the test log.
assert_html_equals($expected, $actual, $message)
Does a 1:1 HTML comparison. Correctly compares attributes in tags. Uses HTML::Parser, which is tolerant of unbalanced tags, so unbalanced tags in the actual will not be detected. Use in test functions.
assert_html_matches($expected, $actual, $message)
Tries to match a block of HTML in a larger page of HTML. $expected must be a well-formed block of HTML.
capture(\&function, ...) -> ($text, $result)
Invokes a function while grabbing stdout, so the "http response" doesn't flood the console that you're running the unit test from, and you can analyse the result in your test function. The ... params are passed on to &function. Use in test functions.
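The idea behind capture can be sketched in plain Perl: redirect STDOUT into an in-memory scalar while the function runs, then restore it. This is illustrative only; the real Unit::TestCase implementation differs.

```perl
# Sketch of a capture() helper: grab STDOUT while running a function.
use strict;
use warnings;

sub capture {
    my ( $fn, @args ) = @_;
    my $text = '';
    # Save the real STDOUT, then point STDOUT at an in-memory scalar.
    open my $saved, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
    close STDOUT;
    open STDOUT, '>', \$text or die "Can't redirect STDOUT: $!";
    my $result = $fn->(@args);
    # Restore the real STDOUT.
    close STDOUT;
    open STDOUT, '>&', $saved or die "Can't restore STDOUT: $!";
    return ( $text, $result );
}

my ( $text, $result ) = capture( sub { print "hello @_"; return 42 }, 'world' );
```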
TWikiTestCase
As described above, TWikiTestCase is a simple base class you can use for deriving new tests. All it does is define a set_up function that manipulates $TWiki::cfg to set up the test environment. Beyond that, the tester is responsible for creating webs, topics etc. Test packages that inherit from TWikiTestCase can safely modify $TWiki::cfg in individual tests, and the original will be restored when the test completes (or fails). TWikiTestCase also provides:
removeWebFixture($twiki, $web)
Cleanly removes a web. Short for $twiki->{store}->removeWeb($twiki->{user}, $web), but also traps and ignores errors. Use from tear_down.
TWikiFnTestCase
This is a class derived from TWikiTestCase, which therefore picks up all the functionality of that class and of Unit::TestCase. It adds some more useful TWikiness.
- It predefines $this->{test_web} (the name of a temporary test web), $this->{test_topic} (a test topic), $this->{users_web} (a users web), $this->{twiki} (a TWiki instance), $this->{request} (the request object used to create $this->{twiki}) and $this->{response} (the response object to be used by $this->{twiki})
- It sets up TWiki to use the default password and user mapping managers, and registers a test user (username 'scum', wikiname 'ScumBag')
- It creates the test web and topic, and the users web.
It also adds the following function:
registerUser($loginname, $forename, $surname, $email)
This function uses the standard TWiki registration code to register a new user.
Logging from unit tests
There are two different kinds of logging available from unit tests; logging the progress of the actual test run, and TWiki log files.
Logging the progress of the test run
You can log the test run to a file using the -log option to the TestRunner module. If this option is given, a log of the test run will be written to the current directory, in a file named for the time and date of the test run, with the extension .log.
$ perl ../bin/TestRunner.pl -log RegisterTests.pm
TWiki log files
By default, unit tests do not leave any traces in TWiki's log files warnYYYYMM.txt and logYYYYMM.txt. If you always need logging for your particular test cases, you can set $TWiki::cfg{LogFileName} or $TWiki::cfg{WarningFileName} in the set_up function of your test package to the desired values. If you want to keep the logs for just one single run, set the environment variable TWIKI_DEBUG_KEEP to a true value; in this case the warning and log files will be kept in a temporary directory. Example:
$ export TWIKI_DEBUG_KEEP=1
$ cd test/unit
$ perl ../bin/TestRunner.pl RegisterTests.pm
.........
$ find /tmp/ -name "TWikiTestCase.*" -print
/tmp/7vq2n9yDWT/TWikiTestCase.warn
/tmp/7vq2n9yDWT/TWikiTestCase.log
# ... examine the log and warn file as you need, then delete the files
$ rm -rf /tmp/7vq2n9yDWT
Testing Plugins with TWikiRelease04x03 and older
TWiki 5 was modified in order to work with execution mechanisms other than CGI. That means that the $query object used is of the TWiki::Request class, not CGI. Also, TWiki::Response was introduced. Since it should be possible to test a plugin with trunk and older releases, Unit::Request and Unit::Response were introduced; they adjust their behavior according to the TWiki release being tested. So, to test a plugin with older releases:
- svn co http://svn.twiki.org/svn/branches/TWikiRelease04x03
- cd TWikiRelease04x03/twikiplugins
- rm -rf UnitTestContrib
- rm -rf MyPlugin
- svn co http://svn.twiki.org/svn/trunk/UnitTestContrib
- cd ..
- perl pseudo-install.pl default
- perl pseudo-install.pl UnitTestContrib
- perl pseudo-install.pl MyPlugin
- cd test/unit
- perl ../bin/TestRunner.pl MyPlugin/MyPluginSuite.pm
Note: used this way, only plugin tests are expected to work; core tests are not.
General advice for new unit testers
Managing test webs and test topics
A common requirement when testing TWiki is to create test topics. There are several ways to do this, and you will see the core unit tests using some very low-level functions to create test data. This is because the core test suite is built up so that each level of testing tests as few core packages as possible, to avoid excessive dependencies between packages. However, if you are testing a module that depends on a working core - for example, an extension - the best advice is to use the functions from the TWiki::Func package. As long as you inherit from TWikiFnTestCase and create test topics in $this->{test_web}, the fixtures will be cleaned up for you when the test completes.
Writing unit tests for extensions
To write a unit test suite for an extension, proceed as follows:
- Create a test/unit/ExtensionName directory under the root of your extension checkout area
- Copy EmptyPlugin/test/unit/EmptyPlugin/EmptyPluginSuite.pm to test/unit/ExtensionName/ExtensionNameSuite.pm
- Copy EmptyPlugin/test/unit/EmptyPlugin/EmptyPluginTests.pm to test/unit/ExtensionName/ExtensionNameTests.pm
- Edit test/unit/ExtensionName/ExtensionNameSuite.pm and s/EmptyPlugin/ExtensionName/g
- Edit test/unit/ExtensionName/ExtensionNameTests.pm and
 - s/EmptyPlugin/ExtensionName/g
 - Change the base class to TWikiFnTestCase
- Add your tests (test_self is an example test function)
Some extensions that have good examples of unit tests are: CommentPlugin, ActionTrackerPlugin, WysiwygPlugin. Use these for reference.
Automated Test Cases
An automated testcase is a TWiki topic that contains a set of "actual" blocks containing source TML, each of which has a corresponding "expected" block containing the HTML expected after post-processing (also known as the "golden" HTML). The TestFixturePlugin compares the actual result of formatting against the expected result to give a pass/fail.
An automated testcase is any topic in the TestCases web named "TestCaseAuto...". In your testcase topic, enter the golden HTML surrounded by structured HTML comments:
<!-- expected -->
...your golden HTML...
<!-- /expected -->
The golden HTML should be what you expect to be rendered in the final output.
expected has a number of options that are specified by words after expected in the tag - for example:
<!-- expected again expand rex -->
expand | Enables expansion of %variables% (TWiki::Func::expandCommonVariables). Normally you should not use the expand option. It is intended primarily for expanding TWiki variables in URL components, and is used when testing generated HTML which is specific to the installation. It should be used with extreme caution, as it assumes that TWiki doesn't do anything naughty during this expansion.
rex | If there is text which you know can never be literally matched - for example, a generated time - you can enter a regular expression to match it instead, if the rex option is enabled. For example, an RE for a time is entered this way: @REX(\d\d:\d\d). Be very careful about using greedy matches. A number of preprogrammed REs, viz. @DATE, @TIME and @WIKINAME, are also provided to simplify expected code.
again | If you have two tests with the same expected text one after the other, you can re-use the expected text from the previous test using this option. The expected text will then be set to the text expected for the previous test. Remember you may need to repeat the expand and rex options again as well.
Anything else you put into an expected tag will be output if there are any test failures, so you can add random text to help identify which expected block failed - for example:
<!-- expected TESTEYESIGHT -->
You specify your actual test markup in the same way:
<!-- actual -->
<!-- /actual -->
Some notes about the comparison process:
- The comparison is performed by CPAN:HTML::Diff, which compares the HTML structures found in the text. See the documentation on CPAN:HTML::Diff for help.
- whitespace is ignored where it has no impact on the way the HTML is rendered.
- The comparison is insensitive to the order of parameters to the tags, but all parameters must be present.
- All HTML entities are normalised to &#dd; style decimal entities before comparison, so (for example) &lt; will match &#60;
- The actual text is read from the raw source of the topic. No processing is done on it (except as described under expand and rex, above)
- The comparison is done on the <body> of the topic only. At present there is no way to compare the <head>.
- expected and actual blocks are matched up in the order they occur;
- If an actual marker is left open in the text (has no matching /actual), all text up to the end of the topic will be taken as part of the test. This allows for testing markup at the end of topics.
- If a /actual tag occurs before an actual tag, all text from the start of the topic up to that tag is taken as the actual text. This allows for testing markup at the start of topics.
- actual and expected blocks can occur in any order, but there must be one actual for each expected.
- If there are differences, the report will indicate which actual / expected pair the difference was found in. The pairs are numbered from the start of the topic (number 1).
If possible, always write unit tests in preference to automated testcases. Unit tests are much faster, and usually require a lot less human interaction to run (so they will be run more often).
Manual Test Cases
Manual test cases are simply scripts of steps to be followed to test a feature.
They are not recommended and should be used only as a last resort.
Setting up a Test Environment
This section is a step-by-step guide to setting up a test environment suitable for running unit tests and automated test cases.
- Prerequisites
- Unit tests must always be run using an unprivileged user. Do not run them as superuser, as some of the tests rely on being able to protect files against attempts by the same process to write files.
- Normally web servers are run using a user like apache:apache or www-data:www-data, and it's usually not possible to run tests as this user, as such users have access controls that would prevent the tests from running.
- You are recommended instead to run the tests as a user who is in the same group as the webserver user, and adjust the permissions in the TWiki tree so that members of the same group are granted the same access as the main user.
- For example, to add user fred to group apache on Linux, use usermod -a -G apache fred
- Then set g permissions to be the same as u permissions on all files in the TWiki tree. There is a script in SettingFileAccessRightsLinuxUnix that can help you do this; simply modify the permissions in all the chmod commands, e.g. change 0644 to 0664.
- You are also recommended to change your umask if necessary so that the webserver user can access the files you create while testing, though this is optional.
- On Mac OS X you may run unit tests with sudo -u www perl ../bin/TestRunner.pl TWikiSuite.pm, so creating a new user is not necessary.
- You need a number of CPAN modules to run the tests. Because the requirements change as people add tests, we can't give a comprehensive list here. The best advice is to run the tests, watch what they can't find, and install the missing modules from CPAN using (for example) perl -MCPAN -e 'install HTML::Parser'
- Check out the subversion branch (e.g. trunk) you are working on (see SubversionReadme) and configure it so it is a running TWiki. This usually involves following the installation steps up to running configure, then running configure once to set the paths. You shouldn't need to do any more than that.
- If you created your web server configuration from ApacheConfigGenerator, make sure to set Options FollowSymLinks for the pub directory, in order to get the static files of symlinked plugins served.
- Keep as many of the default settings as you can.
- From the root of your checkout, cd core and use perl pseudo-install.pl default to install the default plugins
- Use perl pseudo-install.pl BuildContrib to install the build environment
- Use perl pseudo-install.pl UnitTestContrib to install the unit tests
- Set these environment variables:
  - export TWIKI_HOME=/path/to/svn/trunk
  - export TWIKI_LIBS=$TWIKI_HOME/core/lib:$TWIKI_HOME/core/lib/CPAN/lib
- If you are developing a plugin or contrib (e.g. PutaPlugin) using BuildContrib, then:
  - Run perl pseudo-install.pl -link PutaPlugin to install the plugin in your test environment.
  - Enable the plugin using configure. You should now be able to use the plugin in your test TWiki.
  - cd to the plugin-specific directory lib/TWiki/Plugins/PutaPlugin.
  - Run perl build.pl test; this runs the unit test suite in PutaPlugin/test/unit/PutaPlugin/PutaPluginSuite.pm.
- If you are developing a core feature, then:
  - cd core/test/unit
  - Run perl ../bin/TestRunner.pl TWikiSuite.pm to run all the tests; on Mac OS X use sudo -u www perl ../bin/TestRunner.pl TWikiSuite.pm (this runs the tests as user www).
  - Run perl ../bin/TestRunner.pl TestcaseName.pm to run a single test case.
- Some unit tests may require you to create the *auth bin scripts, such as viewauth, which are missing in SVN but are needed as soon as you test ApacheLogin. You can create them as soft links to, or copies of, the non-auth scripts.
Testing Javascript
For testing Javascript, you are highly recommended to investigate the JUnitContrib.
Testing extensions
TWiki extensions (plugins, skins, contribs etc.) are tested in the same way as the core (using unit, automated and manual test cases), as described above. The BuildContrib provides a lot of support for running extension test cases, and you are recommended to use it.
To avoid mixing up core and extension tests, we have adopted the convention that the unit tests for an extension are held in a subdirectory of test/unit. For example, the ActionTrackerPlugin stores its tests in test/unit/ActionTrackerPlugin.
BuildContrib automatically looks for a test suite called ActionTrackerPluginSuite.pm in this directory when you run perl build.pl test. Unit tests are not normally included in the released package (they are not listed in MANIFEST).
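Drawing on the assert functions listed above, an extension test case following this convention might look like the sketch below. It is only a sketch: MyPlugin, doSomething and the expected string are hypothetical names, and TWikiFnTestCase is assumed to create a temporary test web and user in its set_up, as described earlier in this topic.

```
package MyPluginTests;
# Sketch of a minimal extension test case, assuming the layout
# test/unit/MyPlugin/MyPluginTests.pm. All MyPlugin names are
# hypothetical placeholders.
use base qw(TWikiFnTestCase);

use strict;
use TWiki::Plugins::MyPlugin;

sub new {
    my $class = shift;
    return $class->SUPER::new('MyPluginTests', @_);
}

# TWikiFnTestCase's set_up/tear_down create and destroy a temporary
# test web and user; extend them if your tests need more fixture.
sub set_up {
    my $this = shift;
    $this->SUPER::set_up();
    # e.g. save a test topic into the test web here
}

sub test_something {
    my $this = shift;
    # Hypothetical call; replace with a real function of your plugin.
    my $result = TWiki::Plugins::MyPlugin::doSomething('input');
    $this->assert_str_equals('expected output', $result);
}

1;
```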
-- Contributors: CrawfordCurrie, HaraldJoerg
Discussion
There is a trend for very experienced full-time developers to reject a small contribution unless the contributor also provides a unit test case for it.
This topic is the only resource that describes how to write one, and I must say that it is still too brief and assumes that the developer knows quite a few details of the internals of TWiki.
The basic set_up and tear_down in particular are very difficult to understand. How do I set up the rest of the environment? How do I create a test topic? How do I put test content in that topic? Where is all of this stored? How do I clean it up again? What is the full list of configuration and test files I have to create to make a test case run?
I have been on this project for quite a while now and I still cannot understand enough to create a unit test case from scratch.
It takes much less skill to modify existing code and contribute a small fix or enhancement than it takes to write these unit test cases. In practice we are asking many potential contributors to "go away" with our sometimes slightly arrogant rejection of contributions without unit tests.
Those of you who understand how this works, please:
- Extend this tutorial to walk through a full example of how to create a unit test case that creates a full test environment, creates the web and a test topic, populates the test topic, runs a simple test, and tears down the whole thing again, explaining each and every step. This topic is already starting well; it just needs to be extended with a fully working example.
- And when you reject contributions because you want unit tests, and the unit test can be an extra function in an existing test, please tell the contributor which existing test case to add their test to, and always point them to this topic; it can be difficult to find because there is so much information in Codev. And offer your help. Otherwise people just run away with an "if you don't want my contribution then fine!" attitude. TWiki is an advanced piece of code now, and it takes time for a new contributor to learn it all.
-- KennethLavrsen - 26 Apr 2007
I agree about improving the doc; but as SvenDowideit points out, I am the worst person to do that. Testing TWiki is difficult - there is no easy answer - and I know far too much about the internals to be able to write a simple intro. Also, my test environment is extremely mature. I have hand-held a number of people through getting their environments set up, but to date they haven't bothered to feed that experience back into improved doc.
As for rejecting untested contributions; damn right, and I will continue to do so. I put a huge amount of effort into developing and maintaining the unit tests, and they are an incredibly valuable resource. They are what keeps the core honest. A contribution that doesn't include doc or tests is IMHO only 20% of a contribution, and I'm not going to provide the other 80% any more.
Your point about directing people to the correct test case to add new test functions to is well taken, however.
-- CrawfordCurrie - 26 Apr 2007
Would it be an idea for there to be example tests with the EmptyPlugin when created with create_new_extension.pl? I want to write some tests for some of my plugins, but am finding this topic a bit difficult to follow, as it seems to drift between testing the TWiki core and testing extensions. Some examples would really be helpful.
-- AndrewRJones - 25 Jul 2007
I am (still) trying to get a test environment running, and although I have not finished it completely (I have a running TWiki, but contrib development does not work as it is supposed to), I believe I found one little flaw in this doc, in Setting up a Test Environment: I suppose that export TWIKI_HOME=/path/to/twiki/ should be export TWIKI_HOME=/path/to/twiki, without the slash at the end. If you include the slash, $TWIKI_LIBS will have two slashes.
Once I figure out the rest (presently mainly the problems I have with the BuildContrib), maybe I can help out with this manual, because I am a real newbie and have to figure it out from scratch. So maybe that is a good starting point for a hopefully idiot-proof guideline.
-- SebastianKlus - 18 Jul 2008
The two slashes are irrelevant. Two works just as well as one. I really look forward to your help with this little book!
-- CrawfordCurrie - 23 Jul 2008
Updated the "Setting up a Test Environment" prerequisites to reflect the step-by-step order.
-- GeorgeTrubisky - 2010-12-28
Thank you George!
-- PeterThoeny - 2010-12-31
As per UnitTestContribTestOrder, test_* functions are now executed in alphabetical order (more specifically, 'cmp' order).
-- HideyoImazu - 2012-07-12
For my plugin, I have a .t file with unit tests written with Test::More. These are really unit tests, they don't have side effects and they don't need the complete TWiki environment to be executed. Simply the .t file should be executed, that's it. No difficult #SettingUpATestEnvironment procedures necessary. I simply would like to have that test file near the code it is testing, i.e. in a subdirectory "t" of my plugin directory in /twiki/lib/TWiki/Plugins .
-- Rüdiger Plantiko - 2013-02-04
Ruediger, this is certainly feasible for plugins that are not part of the TWiki distribution. The test environment we use runs unit tests and automated test cases on the core and on all extensions we need to test.
-- Peter Thoeny - 2013-02-04