Fri Feb 3 01:49:57 CST 2006


surprising bugs, less time spent wandering through the code to find out
which of your calls does something wrong, and so on)

>  In my ideal world, I'd love for there to be unit tests for
> everything.  I'm just not sure what level we can really achieve.  If
> we create/enforce draconian policies, I think the more likely
> scenario is people just stop writing new code, and not that they
> write unit tests (and/or they fork the code to a version such that
> they don't have to write unit tests).
>
>  I'm certainly not saying unit tests are a bad thing.  But I think we
> have to keep in mind the people who are likely to write the tests.
>
>  All that said, quick thoughts:
>
> 1) Basic function/API tests should be easy to write in most cases (if
> I call xyz(a,b) does it do the right thing).  Those shouldn't be much
> of an issue.
>
> 2) A lot of the code operates on extensive external state (map data,
> archetypes, etc).  So to test the function, you need to have a map
> set up in the right way (right objects on the map, etc).
>
>  In this case, you want something that makes such a test easy.  What
> may be easiest is in fact just a large set of test maps, and the unit
> test could effectively hard code those, with known coordinates (test
> object being the only object at 0,0 and I know I want to insert that
> at 5,5)
>
That's the idea. If you want to test map behaviour X (say, the
'connected' behaviour), you write a map with a few connected objects,
then write a test that checks the status of a specific object (its
connection) after loading. Of course, as part of the unit-testing
framework you need to write some helper methods like
init_test_using_map(path_to_map).
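
As a rough illustration, here is what such a test could look like with
a C unit-testing framework such as Check (only one possible choice).
The helpers init_test_using_map(), find_object_at() and free_test_map(),
as well as the map path, are hypothetical toolkit pieces, not existing
code; the mapstruct/object types are the server's usual ones:

#include <check.h>
#include "test_toolkit.h"  /* hypothetical: init_test_using_map(), find_object_at(), ... */

START_TEST(test_connected_objects)
{
    /* Map written for this test: a lever at (0,0) connected to a gate at (5,5). */
    mapstruct *map = init_test_using_map("test/maps/connected_lever");
    object *lever = find_object_at(map, 0, 0);
    object *gate = find_object_at(map, 5, 5);

    fail_unless(lever != NULL, "no object found at 0,0");
    fail_unless(gate != NULL, "no object found at 5,5");
    /* the actual assertion on the connection status would go here,
       e.g. checking that both objects share the same 'connected' value */

    free_test_map(map);
}
END_TEST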

So along with the framework we need to provide toolkits that help write
test cases in specific areas (e.g. a network toolkit, a map-based test
toolkit, a server initialisation toolkit, and so on). That is the part
that will take most of the time, but it also only has to be written once.

>  My thinking here is that if I can write the basic test to reproduce
> the bug in <10 minutes, I'll probably write the test.  If the
> framework is such that it will take me an hour, then I probably
> wouldn't.
>
If it takes an hour to write a test case for a specific bug, that means
either that it took you that long to find the conditions under which the
bug arises (I could even argue that in some cases writing a unit test
helps find those conditions faster), or that the framework is so hard to
use that no one can use it (which is not the case for most frameworks).
Basically, a unit test is just this:

- init needed server code (with toolkits, take 2 or 3 lines)
- do some methods calls
- check_for_some_condition(boolean condition, char*
error_message_if_condition_not_met)
- do some methods calls
- check_for_some_condition(boolean condition, char*
error_message_if_condition_not_met)
- do some methods calls
- check_for_some_condition(boolean condition, char*
error_message_if_condition_not_met)
- do some methods calls
- check_for_some_condition(boolean condition, char*
error_message_if_condition_not_met)
- ...
- test finished

The main() and the fixture-creation code can almost be cut and pasted
from any existing unit test.
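
For instance, with Check (again just one candidate framework, and with
hypothetical setup code) the skeleton is roughly the following; only the
test bodies and the setup change from one fixture to the next:

#include <stdlib.h>
#include <check.h>

static void setup(void)    { /* init the needed server code via the toolkits */ }
static void teardown(void) { /* release whatever setup() allocated */ }

START_TEST(test_something)
{
    /* do some method calls ... */
    fail_unless(1 + 1 == 2, "condition not met");
    /* do some more method calls ... */
    fail_unless(2 > 1, "another condition not met");
}
END_TEST

static Suite *my_suite(void)
{
    Suite *s = suite_create("my_fixture");
    TCase *tc = tcase_create("core");

    tcase_add_checked_fixture(tc, setup, teardown);
    tcase_add_test(tc, test_something);
    suite_add_tcase(s, tc);
    return s;
}

int main(void)
{
    int failed;
    SRunner *sr = srunner_create(my_suite());

    srunner_run_all(sr, CK_NORMAL);
    failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}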

> 3) If the number of tests grows, it is probably desirable to be able
> to run some specific subset of tests, eg, I don't want to have to
> wait 15 minutes for the server to run all the tests when I'm trying
> to fix some specific bug - I just want it to run my test.  After all
> is done, then perhaps running all tests before commit makes sense.
>
Of course, you should be able to run a specific fixture rather than all
of them. Here at work, running all the unit tests can take a few hours
(mainly because we also use a code-coverage tool that runs every unit
test in debug mode). When we are making changes, we only run a specific
fixture or set of fixtures, not every test case. Running all the test
cases is done at night from a CVS checkout, and the report is published
online; that way we can keep an eye on what's wrong.

It is a good thing, for example, to run all those tests before a release.
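
One simple way to get that selection (sketched here with Check and
hypothetical map_suite()/network_suite() constructors) is to let the
test binary take the fixture name as an argument and only add the
matching suite to the runner:

#include <stdlib.h>
#include <string.h>
#include <check.h>

/* Hypothetical suite constructors, one per toolkit area. */
Suite *map_suite(void);
Suite *network_suite(void);

int main(int argc, char **argv)
{
    int failed;
    SRunner *sr;

    if (argc > 1 && strcmp(argv[1], "map") == 0) {
        sr = srunner_create(map_suite());
    } else if (argc > 1 && strcmp(argv[1], "network") == 0) {
        sr = srunner_create(network_suite());
    } else {
        /* no argument: run everything, e.g. for the nightly run or a release */
        sr = srunner_create(map_suite());
        srunner_add_suite(sr, network_suite());
    }

    srunner_run_all(sr, CK_NORMAL);
    failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}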
