[crossfire] Protocol & compression.
Sebastian Andersson
bofh-lists-crossfire-dev at diegeekdie.com
Sun Mar 26 05:08:55 CST 2006
On Sat, Mar 25, 2006 at 10:02:56PM -0800, Mark Wedel wrote:
> There are a few likely differences which may or may not affect crossfire in
> different ways:
>
> 1) Some data crossfire sends is just not compressible. The PNG data comes to
> mind - trying to compress it is at best a waste of time, and at worse, increases
> amount of data being transmitted. The other problem related to this is you'll
I've found PNG data to be compressible with zlib, at least when many
images are sent one after another.
Perhaps there is a bug in the 1.6 server, but many of the image2
commands sent contained hundreds of identical octets in a row,
clearly compressible on their own.
I hacked together a test program, which is included, that analyses
strace output from the client. Please don't look at the code ;-).
Compile it with: g++ -Wall -O3 comptest.c -o comptest -lz
(only tested on Linux with g++-4.0; it might work elsewhere too with
other compilers :-).
I played for a short while against a 1.6 server (running to the slave
mines and throwing some comets around me, then word of recall took me
back home, where I prayed and quit), while running gcfclient under strace:
strace -o log -s 4096 -esocket,read gcfclient -nocache
(I used gcfclient 1.9.0, from the debian distribution).
I then postprocessed the log file by removing everything before the
read(6, "\0#") and everything after read(6, ...) = 0,
and removing all the lines that don't contain "read(6,".
(The comptest program should really do that, but currently it doesn't.)
The comptest program reads that file, reconstructs the data from the
read statements and sends it through zlib. Each time the client got
nothing to read (read returned EAGAIN), the test program flushes the
zlib stream. This simulates my suggestion of packing more than one
command together into a "chunk".
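To make that concrete, here is a minimal sketch of the
queue-then-sync-flush loop, written against plain zlib. It is not the
code from comptest.c; the buffer size, compression level and sample
commands are just illustrative:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    z_stream zs;
    memset(&zs, 0, sizeof(zs));         /* zalloc/zfree/opaque = Z_NULL */
    if (deflateInit(&zs, 5) != Z_OK)    /* level 5, as in the runs below */
        return 1;

    /* stand-ins for the protocol commands queued up in one "chunk" */
    const char *cmds[] = { "image2 ...", "map2 ...", "stats ..." };
    unsigned char out[1024];            /* 1024-byte output buffer */

    for (int i = 0; i < 3; i++) {
        zs.next_in  = (Bytef *)cmds[i];
        zs.avail_in = strlen(cmds[i]);
        while (zs.avail_in > 0) {       /* just queue each command */
            zs.next_out  = out;
            zs.avail_out = sizeof(out);
            deflate(&zs, Z_NO_FLUSH);
            fwrite(out, 1, sizeof(out) - zs.avail_out, stdout);
        }
    }
    /* here the client would have hit EAGAIN: end the chunk on a byte
       boundary so the receiver can decode everything queued so far,
       while the compression dictionary survives into the next chunk */
    do {
        zs.next_out  = out;
        zs.avail_out = sizeof(out);
        deflate(&zs, Z_SYNC_FLUSH);
        fwrite(out, 1, sizeof(out) - zs.avail_out, stdout);
    } while (zs.avail_out == 0);

    deflateEnd(&zs);
    return 0;
}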
The program will output a file like this:
READING: 107
WRITING: 786
CHUNK_TIME: 2.393000ms
CHUNK IN: 2637
CHUNK OUT: 786
CHUNK RATIO: 3.354962
CHUNK SAVED: 1851
where READING is printed each time the program reads something from
the log file (the number is the number of bytes),
WRITING each time the zstream has some output to write,
CHUNK_TIME is the wall-clock time that chunk took to process (not
really reliable),
CHUNK IN is the number of bytes to write before compression,
CHUNK OUT is the number of bytes after compression,
CHUNK RATIO is in/out, and
CHUNK SAVED is in-out.
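(For the sample chunk above: 2637 / 786 = 3.354962 and
2637 - 786 = 1851, which is where the last two lines come from.)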
At the end the totals are given:
./comptest < log | tail -4
Total in: 795KB
Total out: 310KB
Ratio: 2.561391
Saved: 485KB
This was with all default values; they are described later.
Changing the code to call:
setvbuf(stdout, NULL, _IONBF, 0);
and running it with ltrace -T -e deflate seems like a better way
to get a feeling for the overhead of zlib, but I've not done more
than visually inspect some values:
0.065ms for compressing 2 bytes, which became 8 bytes.
0.324 + 0.091ms for compressing 555 bytes, which became 318 bytes.
Around 0.4ms to compress 29763 bytes into 10000 bytes.
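A full run in that mode would look something like this (assuming the
postprocessed log is in "log"; ltrace writes its trace to stderr, so
the program's own output can be discarded):

ltrace -T -e deflate ./comptest < log > /dev/null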
The program takes three optional arguments (an example invocation
follows the list):
* compression output buffer size, default 1024; it's a tradeoff
between per-packet lag and TCP header overhead, of course.
* compression level 1..9, default 5; higher compresses more.
* overhead per compression write, default 0.
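For example, assuming the arguments are positional in the order listed
above, a run with the default buffer size and level but a 4-byte
overhead would be:

./comptest 1024 5 4 < log | tail -4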
If an overhead is given, the program adds that overhead to each
compressed write, and it will not compress data that is less than 16
bytes long. This is not totally correct, of course; the real
compression overhead (both in bytes and in CPU time) would be larger
if one flushed after each command. Anyway, with an overhead of just 4
bytes (i.e. two length bytes + "gz"), 447KB was saved instead of the
485KB when everything was compressed (8% less). On the other hand,
15% less CPU was used for the whole program, measured with time and
just counting the "user time" (~170ms vs ~200ms).
(The CPU used was an AMD Athlon(TM) XP 2400+; 2GHz, 256KB cache,
~4k bogomips on linux-2.6.15)
I did some other quick measurements with time over the whole file.
With compression level 9, the run time was twice as long,
but it only saved 5KB more.
With compression level 1, the run time was 10% less and the output
was 27KB larger (6% more) than at level 5.
> often need to send a bunch of image data at one time (player changes to new
> map), and now there is lots of data you are trying to compress - the time it
> takes to compress that much data could become significant.
So is receiving uncompressed data if you've got a slow link and need
compression. When measuring lag to modem users, they usually have
less lag with compression turned on, because of the increased
effective bandwidth. With crossfire, the same might be experienced
for 64-128kbps connections.
/Sebastian
--
.oooO o,o Oooo. Ad: http://dum.acc.umu.se/
( ) \_/ ( ) (o_
"Life is not fair, but root \ ( /|\ ) / (o_ //\
password helps!" -- The BOFH \_) (_/ (/)_ V_/_
-------------- next part --------------
A non-text attachment was scrubbed...
Name: comptest.c
Type: text/x-csrc
Size: 7503 bytes
Desc: not available
URL: http://mailman.metalforge.org/pipermail/crossfire/attachments/20060326/2fd4c895/attachment-0001.c