[crossfire] new metaserver

Brendan Lally b.t.lally at warwick.ac.uk
Thu Jun 16 18:40:13 CDT 2005


On Thursday 16 June 2005 06:02, Mark Wedel wrote:

>   Since server updates may be sporadic, presumably the metaservers won't
> drop the listing for a server until some amount of time passes (30 minutes
> or something).  Note also that the current metaserver tracks when it last
> got an update from a server, and does provide that information to the
> client (I haven't heard from this server in xyz seconds).
     
Yeah, that is another point: since the clients are breaking anyway, it is 
possible to prettify this output and present it in human-readable form (e.g. 
as 1h, 23m).

> > For a well configured web server, something like mod_php or zend or
> > similar will be running anyway, the scripts will be acting like compiled
> > code in many respects. It will still be slower than a well written
> > independent program, but then that is the price that is paid for having a
> > web server handle all of the availability stuff.
>
>   Right - I'm not sure about the cost of doing it web server based vs an
> independent program.  For the number of crossfire servers we're talking
> about, probably not a big deal in any case - although with it being web
> server based, you do have to be concerned with things like file locking,
> which may not scale well with a large number of updates happening - this is
> mostly because it has to do all updates through a file.  A standalone
> metaserver has the advantage that it only has to do some locking on the
> data structures, and only periodically needs to write them out to a file
> (say every few minutes for backup purposes).  The php one has to read/write
> the file every time it gets an update.  As said, for the number of servers
> we have, it may not be a big deal.
     
Yes, I don't know how to get round this, although at least on a good 
web server, if the files are accessed enough to matter, they shouldn't be 
leaving RAM.

> > By comparison, however the final system is implemented, the client /will/
> > connect to a server and parse some information received from it. However
> > that server is configured, libcurl can pretty much cope, so writing a
> > fairly generic parser attached to libcurl is a nice base to begin from;
> > should libcurl be disliked as a dependency, then it is simply a matter of
> > adding the appropriate socket code later.
>
>   I never like adding new dependencies if it can be avoided.  As a data
> point, my system did not have libcurl installed on it.
     
Sure, but for playing with prospective designs, libcurl is nice, since it is 
simply a case of changing a couple of lines to deal with a whole new 
protocol.

Unless the final form is quite a complex protocol (and I don't think it will 
be) then replacing libcurl with direct socket access is quite 
straightforward.

>   In my ideal world, the metaservers should be able to provide information
> in both 'pretty' html and something raw.  One could envision this by
> something like:
>
> http://myhost/metaserver.php
>
> giving nice html output, and something like:
>
> http://myhost/metaserver.php?output=raw
     
Or indeed, showservers.php and showserversclient.php, which were two of the 
first things that I wrote when I started to play with this (since a 
web browser is easier to play with than some quickly crufted C code).

I haven't done file locking there yet, because I haven't got round to it.

You could merge the two scripts with a GET parameter, I guess.

>   providing real raw output (something like we have now).  I think however
> the client would still have to toss the http headers, but that shouldn't be
> too bad.
     
Well, it can check that it got a 200, and if not try somewhere else 
automatically.

> > watchdog, when it is compiled in, has the server send UDP packets to a
> > local socket. AFAICT it doesn't really matter too much /what/ it sends, so
> > it might as well send the data that the metaserver will use; in that case
> > the program you describe would end up looking similar to
> > crossfire/utils/crossfire-loop.c (though maybe in perl?)
>
>   IMO, the metaserver notification helper has to be started automatically
> by the server, and not be an external entity started via script (like the
> watchdog one uses).  This is simply ease of use - otherwise the problem is
> that someone starts it by hand, or uses their own monitoring script, and
> doesn't send metaserver updates when they should.
     
True, but a fork and exec at startup can do that.

>   Also, it is probably nice for the metaserver updater to be connected via
> tcp or pipe to the server, so each can know when the other dies.  If the
> helper sees the server die, it can send a last update to the metaserver
> informing it that the server just died, and then exit.  If the server sees
> the helper die for any reason, it can start up another copy.  The problem
> with the udp notification and watchdog is that the watchdog could be dead
> and the server would never know.
     
This is true, although to a certain extent the idea is that the watchdog is 
simple enough not to die.

>   The helper program also needs to have some basic security - making sure
> the connection is coming from the local host and not something remote
> (don't want some remote host connecting to the helper and forging
> information).
     
Is it insecure at the moment? It /is/ already in the server code; this was why 
I suggested using it - if it is there, presumably it at least works in some 
fashion (although that assumption has been shown to be false in the past).

>   The other nice bit about the helper and server talking via tcp is the
> potential, perhaps in the future, for the helper to talk back to the server
> with bits of information.  I'm not sure what info that would be, but it would
> still be nice to be able to do it.
     
Yes, server intercommunication could then be done at the proper level, so that 
shouts could go between servers.

Then tell would act in a similar way to jabber, eg:

tell leaf@crossfire.metalforge.net
or
tell mikeeusa@cat2.dynu.ca

with a missing @ obviously implying a local tell.

possibly there could even be interserver postal service, and interserver 
querying of 'who' output (I don't know if that would be considered a good 
thing)

I'm inclined to say even interserver balance transfers, although that would 
allow lots of potential for abuse....

