[crossfire] Unit tests

Mark Wedel mwedel at sonic.net
Wed Mar 1 00:33:32 CST 2006


Alex Schultz wrote:
> Most of the tests in that directory are small tests of a specific 
> function, but are not very robust and have no automation. However they 
> may still be useful to look at for ideas of what to test and possibly 
> base some of the test maps off of. That python one seems like a decent 
> unit test of the python plugin, but not of other functions. Personally, 
> I think the most robust way to set up a unit test system would be 
> to write it mainly in C, reading from a text-based script file of what to 
> do. The high-level tests would be done mainly by map files and python 
> code, but I think the unit test framework should be a C server 
> plugin, so it could also run lower-level tests of individual 
> server functions much more easily.

  Yes - that seems reasonable.  But at some level, higher-level tests using maps 
are perhaps better - they test how the game actually works.  It also means that if 
the semantics of some function change (say, a new flag value that can be passed), 
such tests should catch the change better than a plugin making the calls 
directly - the function may appear to work as expected when called with the new 
values.

  I certainly don't think any of the test maps I wrote would be that good for 
automated testing.  They were useful for manual testing.  For automated 
testing, more precisely defined results are desirable.

  For example, you'd want to control the results more tightly - in a boulder 
movement test, you'd probably want walls so the boulder can move to just one or 
two spaces instead of eight, making the results easier to check.


> Also, I was thinking that when the server starts up with the unit test 
> plugin, it should fork a new copy of the server. This way, if a segfault 
> occurs, it can core dump, fail that test, and resume with the next test. 
> It also keeps the server states between the tests more separate and 
> helps to make sure one failed test doesn't cause other tests to fail 
> when they are fine.

  I'd think it also depends on the test.  It could be nice in some regard to 
have two cycles - one where each test runs fairly isolated, and another where all 
the same tests run consecutively on the same server (to better catch 
possible structure corruption or the like).  The consecutive cycle would only 
run if all the individual tests passed.

  That said, any test that results in a coredump indicates either broken code or 
a broken test, either of which should be fixed.
