Brendan Lally wrote:
>> Right - I'm not sure of the cost of doing it web server based vs as an independent program. For the number of crossfire servers we're talking about, probably not a big deal in any case - although with it being web server based, you do have to be concerned with things like file locking, which may not scale well with a large number of updates happening - this is mostly because it has to do all updates through a file. A standalone metaserver has the advantage that it only has to do some locking on the data structures, and only periodically needs to write them out to a file (say every few minutes, for backup purposes). The php one has to read/write the file every time it gets an update. As said, for the number of servers we have, it may not be a big deal.
>
> Yes, I don't know how to get round this, although at least on a good webserver, if the files are accessed enough to matter, they shouldn't be leaving RAM.

True, but that is harder to predict. The OS will do the file caching, but if the webserver is busy enough, the cache hit rate might be poor. The flip side is that if the metaserver is used a lot, it is more likely that the file will be cached, and thus performance improves. But how the OS caches varies from OS to OS. Linux is very aggressive about caching data, others less so.

OTOH, if the core metaserver can be run as a non web program, I think there are still benefits from doing that. It's a little more work to write such a program, but probably not much.

Related to this, how do the slave metaservers do updates? They can obviously only do anything when someone is requesting data from them. Do they look at their local cache modification time, and if it hasn't been updated in X minutes, go talk to the core metaserver to get up to date information?

>> In my ideal world, the metaservers should be able to provide information in both 'pretty' html and something raw. One could envision this by something like:
>>
>> http://myhost/metaserver.php
>> giving nice html output, and something like:
>>
>> http://myhost/metaserver.php?output=raw
>
> or indeed, showservers.php and showserversclient.php, which were two of the first things that I wrote when I started to play with this. (since a webbrowser is easier to play with than some quickly crufted C code.)
>
> I haven't done file locking there because I am yet to get round to it.
>
> You could have a get call and merge the two I guess.

Probably not a big difference for them to be one program. Two separate scripts work just fine.

>>> watchdog when it is compiled in has the server send UDP packets to a local socket. AFAICT it doesn't really matter too much /what/ it sends, so it might as well send the data that the metaserver will use, in that case the program you describe would end up looking similar to crossfire/utils/crossfire-loop.c (though maybe in perl?)

>> IMO, the metaserver notification helper has to be started automatically by the server, and not be an external entity started via script (like the watchdog one uses). This is simply ease of use - otherwise the problem is that someone starts it by hand, or uses their own monitoring script and doesn't send metaserver updates when they should.
>
> True, but a fork and exec at startup can do that.

Right, but that then isn't really the watchdog program.
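For what it's worth, here's a rough sketch of how that startup could look - this is not existing crossfire code, and "metaserver-helper" is just a made-up binary name. The pipe gives both sides a way to notice the other dying: the helper sees EOF on its stdin when the server exits, and the server gets SIGPIPE/EPIPE on write if the helper is gone.

/* Hypothetical sketch - server forks/execs a metaserver helper at
 * startup, connected by a pipe.
 */
#include <stdio.h>
#include <unistd.h>

int start_metaserver_helper(void)
{
    int fds[2];

    if (pipe(fds) == -1) {
        perror("pipe");
        return -1;
    }
    switch (fork()) {
    case -1:
        perror("fork");
        return -1;
    case 0: /* child: make the read end its stdin and run the helper */
        close(fds[1]);
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        execlp("metaserver-helper", "metaserver-helper", (char *)NULL);
        perror("execlp");   /* only reached if the exec fails */
        _exit(1);
    default: /* parent (the server) keeps the write end */
        close(fds[0]);
        return fds[1];      /* write updates here; EPIPE/SIGPIPE on
                               write means the helper died, so the
                               server can start another copy */
    }
}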
>> Also, it is probably nice for the metaserver updater to be connected via tcp or pipe to the server, so each can know when the other dies. If the helper sees the server die, it can send a last update to the metaserver informing it that the server just died, and then exit. If the server sees the helper die for any reason, it can start up another copy. The problem with the udp notification and watchdog is that the watchdog could be dead and the server would never know.
>
> This is true, although to a certain extent the idea is that the watchdog is simple enough to not die.

Famous last words. But there are of course other possibilities beyond the watchdog having faults. Someone accidentally kills it. Something external to it causes it to fail (out of memory, disk error, whatever). And right now, if the watchdog does die, it's not a big deal - the server continues to run, and no one notices anything different. However, if the metaserver helper dies and is not noticed, then the server in question is no longer reported, which does cause problems.

I'd personally like things to be simpler - if a user can just run the main server, and everything works, that is much better for them than having to restart the helper manually.

>> The helper program also needs to have some basic security - making sure the connection is coming from the local host and not something remote (don't want some remote host connecting to the helper and forging information).
>
> Is it insecure at the moment? It /is/ already in the server code, this was why I suggested using it, if it is there presumably it at least works in some fashion (although this has been shown to be false in the past).

Right now, it appears insecure, in that a udp packet to port 13325 resets the watchdog timeout. The code does not appear to do any validation that the packet in fact came from the local host.

The other problem is that it uses udp. While spoofing an IP here probably isn't much of a concern, it does mean that it is very difficult for the server to know if that packet has been received, and thus if the helper is still alive.

The other gotcha is that the main purpose of the watchdog program is to restart the server if it dies or is not responsive. The main purpose of the metaserver helper program is to send data to the metaserver. I can envision cases where this helper could basically freeze up resolving IP addresses and trying to connect to them - in that case, this helper really can't be doing anything else, such as restarting the server if it crashes. IMO, it's much better for this to be a standalone program with a specific purpose. I otherwise fear that if the watchdog program is modified, these shortcomings could make it not useful as a watchdog program anymore.
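To illustrate, the missing validation is only a few lines - this is a sketch, not the actual watchdog code, assuming the listener uses the usual recvfrom() interface:

/* Hypothetical sketch - after recvfrom(), reject any packet whose
 * source address is not the loopback address.  Binding the socket to
 * 127.0.0.1 instead of INADDR_ANY would be even simpler.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int recv_local_packet(int sock, char *buf, size_t len)
{
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    ssize_t n = recvfrom(sock, buf, len, 0,
                         (struct sockaddr *)&from, &fromlen);

    if (n < 0)
        return -1;
    /* drop anything not from 127.0.0.1 - a remote host could
     * otherwise keep resetting the watchdog timer (or, for the
     * helper, forge metaserver data) */
    if (from.sin_addr.s_addr != htonl(INADDR_LOOPBACK))
        return 0;
    return (int)n;
}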
>> The other nice bit about the helper and server talking via tcp is the potential, perhaps in the future, for the helper to talk back to the server with bits of information. I'm not sure what info that would be, but it would still be nice to be able to do it.
>
> Yes, server intercommunication could then be done at the proper level, so that shouts could go between servers.
>
> Then tell would act in a similar way to jabber, eg,
>
> tell leaf at crossfire.metalforge.net
> or
> tell mikeeusa at cat2.dynu.ca
>
> with a missing @ obviously implying a local tell.
>
> possibly there could even be interserver postal service, and interserver querying of 'who' output (I don't know if that would be considered a good thing)
>
> I'm inclined to say even interserver balance transfers, although that would allow lots of potential for abuse....

Well, I'll leave it open what data would go between servers. I'm a little reluctant to reinvent irc here - if we're going that route, let's just grab the ircd sources and use them. I say that because a potential fear I'd have about interserver shouts is that you could get a server with no DM on at the moment, and someone on that server shouting to another server and making things very annoying over there.

That said, things like interserver 'who' could be useful. If you have a friend that plays, but you're not always on the same server, being able to easily see where they are playing might be nice. But in any case, my main point was that with it being a 2 way communication, these things would in fact be possible.

The metaserver helper program is really a bad name - what it really is is just a proxy (which could actually be in perl) - the main point of it being a proxy is so that slow, blocking operations (DNS lookups) don't freeze the main server.
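To make that concrete, here's a rough sketch of the proxy's main loop - in C rather than perl, and with a made-up metaserver hostname and port. The getaddrinfo() call is exactly the kind of thing that can stall; here only the helper blocks, never the main server:

/* Hypothetical sketch of the proxy's job: read an update line from
 * the server (stdin, via the pipe), resolve the metaserver's name,
 * and forward the update.  Hostname and port are placeholders.
 */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char line[1024];

    /* EOF on stdin means the server died: this is where a real
     * helper would send one last "server down" update before exiting. */
    while (fgets(line, sizeof(line), stdin) != NULL) {
        struct addrinfo hints, *res;

        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_DGRAM;

        /* this DNS lookup may block for a long time; only the
         * helper freezes, the main server keeps running */
        if (getaddrinfo("metaserver.example.com", "13326", &hints, &res) != 0)
            continue;

        int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (sock >= 0) {
            sendto(sock, line, strlen(line), 0, res->ai_addr, res->ai_addrlen);
            close(sock);
        }
        freeaddrinfo(res);
    }
    return 0;
}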