Let's start with an example
step1-frameworks: Shapes3D-Point-more.t
Start with some example code and unit tests to go with it.
Don't focus a lot on the code. The Point class is mostly trivial, but allows for some obvious tests. The test file does pretty simple testing of the class.
Audience participation time.
These are some of the reasons I can think of. There are more, and not all of them apply to every project or team.
1..2
ok 1 - successful test
not ok 2 - failed test
# comment - for diagnostics and such
TAP: Test Anything Protocol. Simple to parse and understand. Trivial to generate.
Designed to require minimal effort to work with. It predates all of the modules we currently use.
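TAP really is trivial to generate; here is a sketch that emits it with nothing but print statements, no test modules at all (the assertions are invented stand-ins):

```perl
use strict;
use warnings;

# Build one TAP result line: "ok N - name" or "not ok N - name".
sub tap_line {
    my ( $number, $passed, $name ) = @_;
    return ( $passed ? 'ok' : 'not ok' ) . " $number - $name";
}

# The plan, then one line per assertion.
print "1..2\n";
print tap_line( 1, 1 + 1 == 2,        'addition works' ),  "\n";
print tap_line( 2, lc('TAP') eq 'tap', 'lc() lowercases' ), "\n";
```

Everything the modules below do ultimately boils down to printing lines like these.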
Test::More
Test::Class
Test::Spec
A large part of writing good tests is organizing them and making them easy to understand. A test that fails in a non-obvious way can be worse than no test at all.
Test::More
use Test::More 'no_plan'; # tests => 6;
ok( 1 == 1, 'Simple boolean test' );
is( 2/2, 1, 'Are the parameters eq' );
like( 'String', qr/ring/, 'regex match' );
is_deeply( $hash, $expected, 'test complicated structures' );
cmp_ok( 2/2, '==', 1, 'More precise comparison' );
diag( 'Always printed, even in a normal run' );
note( 'Only printed in verbose mode' );
pass( 'Unconditional success' );
fail( 'Unconditional failure' );
Some of the basic tools supported by Test::More. There are more, but these handle most of the cases you will need.
Test::Class
Test::Class->runtests;
Look at Shapes3D-Point-class.t in step1.
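For a feel of the shape, here is a minimal Test::Class sketch. The package and method names are hypothetical, not taken from the step1 file: tests live in methods marked with the Test attribute, and setup runs before each one.

```perl
package Point::Test;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;

# Runs before every test method.
sub setup : Test(setup) {
    my ($self) = @_;
    $self->{point} = { x => 0, y => 0, z => 0 };
}

# The number in parentheses is this method's plan.
sub origin : Test(2) {
    my ($self) = @_;
    is( $self->{point}{x}, 0, 'x starts at origin' );
    is( $self->{point}{y}, 0, 'y starts at origin' );
}

package main;
Point::Test->runtests;
```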
Test::Spec
describe 'name' => sub {};
context 'name' => sub {};
it 'name' => sub {};
before each => sub {};
before all => sub {};
after each => sub {};
after all => sub {};
Look at Shapes3D-Point-spec.t in step1.
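The same ideas in Test::Spec's BDD style; a minimal sketch with invented names, just to show how the pieces from the list above nest:

```perl
use strict;
use warnings;
use Test::Spec;    # also exports the Test::More assertions

my %point;

describe 'A point' => sub {
    before each => sub {
        %point = ( x => 0, y => 0 );
    };

    context 'at the origin' => sub {
        it 'has a zero x coordinate' => sub {
            is( $point{x}, 0 );
        };
        it 'has a zero y coordinate' => sub {
            is( $point{y}, 0 );
        };
    };
};

runtests unless caller;
```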
These are some useful concepts to help you decide what and how to test. They are definitely not enough on their own.
Assertions are just functions that test a condition.
Remember the definition of TAP above. Build whatever conditional logic you want and use pass() and fail() to report the result.
Why not write our own?
step2-helper: Shapes3D-Point-helper.t
step2-better-point:
Here's some example code that shows a helper library and how to use it. Don't make your helpers too complicated. They should be mostly obvious to someone who knows your domain.
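For flavor, here is a hypothetical helper in that spirit; nothing below comes from the step2 files. The validation logic is ordinary code, and pass()/fail() report the result with a diagnostic on failure:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Return an error message for a bad point, or nothing for a good one.
sub point_error {
    my ($point) = @_;
    for my $axis (qw( x y z )) {
        return "missing or non-numeric '$axis' coordinate"
            if !defined $point->{$axis}
            || $point->{$axis} !~ /^-?\d+(?:\.\d+)?$/;
    }
    return;
}

# The helper: one named assertion a reader of your domain can follow.
sub is_valid_point {
    my ( $point, $name ) = @_;
    my $error = point_error($point);
    if ($error) {
        fail($name);
        diag($error);
        return 0;
    }
    pass($name);
    return 1;
}

is_valid_point( { x => 1,   y => 2,  z => 3 }, 'integer point is valid' );
is_valid_point( { x => 0.5, y => -2, z => 3 }, 'fractional and negative values are valid' );
```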
Errors are usually not the normal behavior of the code, which makes it especially important to verify that functionality. If an error case is triggered, you really want your error reporting/handling to help you, not make matters worse.
It may not be reasonable to test every error condition. You should at least get the really important or common ones.
Making certain your validation works is a good use for data-driven tests. I have also found them useful for transformations with boundary conditions.
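The data-driven pattern is a table of cases and one loop; a small sketch with invented cases, validating a toy numeric check:

```perl
use strict;
use warnings;
use Test::More;

sub looks_numeric {
    my ($value) = @_;
    return $value =~ /^-?\d+(?:\.\d+)?$/ ? 1 : 0;
}

# Each case: input string, expected validity, description.
my @cases = (
    [ '42',   1, 'plain integer'    ],
    [ '-3.5', 1, 'negative decimal' ],
    [ '',     0, 'empty string'     ],
    [ 'abc',  0, 'non-numeric'      ],
);

for my $case (@cases) {
    my ( $input, $expected, $description ) = @$case;
    is( looks_numeric($input), $expected, $description );
}

done_testing();
```

Adding a boundary case is one more row, not one more copy-pasted test.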
Keep the number of tests in a file reasonable.
Group related tests
Your tests should be organized to help future-you (and your team) find problems and understand the use of your code.
Choose what works. Different styles work with different projects. There is no one, true way in software.
Tests are code. So use the tools and skills you already know to write your tests.
On the other hand, tests should be easier to understand and read than normal code. If you have to puzzle out how a test is working, you have already failed.
On the gripping hand, tests are also documentation. They demonstrate how your code should be used. So make sure they are good examples.
Some of the references I used.
is_deeply( $complex_structure, $expected, 'The foo should be constructed' )
or note( explain $complex_structure );
Every assertion returns a true value on success and a false value on failure. This allows you to execute more code conditionally as you run the test. Used carefully, this can provide a lot of help to a future maintainer.
The explain() function is like Data::Dumper. Remember that note() only prints in verbose mode. Together, these pieces allow you to communicate with a future maintainer.
Leave troubleshooting help as comments, if necessary.
In a previous life, we had a test that depended on something we did not control and failed intermittently, until we figured out a way to make it work consistently. In the meantime, we had a comment explaining the problem and suggesting that the test be re-run. Most of the time, that next run would work.
You'll need to decide what the one thing is.
Your expected value should be a literal, not calculated.
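If the expectation is calculated with the same logic as the code under test, a shared bug passes silently. A contrast, with invented values:

```perl
use strict;
use warnings;
use Test::More tests => 2;

sub double { my ($n) = @_; return $n * 2 }

# Bad: the expectation repeats the implementation, so it proves nothing.
is( double(21), 21 * 2, 'doubles its argument (calculated expectation)' );

# Good: a literal the reader can verify by eye.
is( double(21), 42, 'doubles its argument (literal expectation)' );
```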
Code that's not covered by tests is untested. But tests must be maintained and can have bugs of their own. There comes a point where the cost of a test is higher than the benefit you gain by having it.
If a test is not pulling its weight, kill it.
TODO tests allow you to prepare for code that doesn't work yet.
While this can lead to YAGNI-violations in your tests, used judiciously, it can help make certain you don't forget to implement something.
SKIP supports tests that are conditional on the environment.
I'm using environment very loosely here.
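Both are blocks in Test::More; a short sketch, with an invented not-yet-implemented feature and an invented environment check:

```perl
use strict;
use warnings;
use Test::More tests => 2;

TODO: {
    local $TODO = 'rotation is not implemented yet';    # hypothetical feature
    ok( 0, 'point rotates around the z axis' );         # fails, but reported as TODO
}

SKIP: {
    skip 'no display available', 1 unless $ENV{DISPLAY};
    pass('rendering test would run here');
}
```

A TODO failure doesn't fail the suite, and when the feature lands, the unexpected pass reminds you to promote it to a real test.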