[Crossfire-wiki] [Crossfire DokuWiki] page added: dev:media_updates
no-reply_wiki at metalforge.org
Tue Dec 9 02:21:51 CST 2008
A page in your DokuWiki was added or changed. Here are the details:
Date : 2008/12/09 02:21
User : mwedel
Edit Summary: created
The basic idea is that the client and server could be a lot smarter about getting updated images to the client. This may be related to dynamic content.
Currently, images are handled by sending a checksum for each image, and the client downloads those that are new or changed. This works, but doing all that checking at startup is too slow to be feasible, even on fast links. It also isn't very efficient - images do not change very often.
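Roughly, the current per-image scheme amounts to something like the following sketch (Python, for illustration only - the real client and server are C, and the function names, cache layout, and use of MD5 here are all assumptions, not the actual protocol code):

  import hashlib

  def client_check_images(server_checksums, local_cache):
      """server_checksums: {image_name: checksum} sent at connect time.
      local_cache: {image_name: raw image bytes} the client already has.
      Returns the images the client still has to request, one by one."""
      to_fetch = []
      for name, remote_sum in server_checksums.items():
          data = local_cache.get(name)
          local_sum = hashlib.md5(data).hexdigest() if data else None
          if local_sum != remote_sum:
              to_fetch.append(name)   # one request per missing/changed image
      return to_fetch

The cost is that every image is checked individually at every startup, even though almost none of them have changed.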
Instead, I would see the server maintain a table like:
  #base_revision  update_revision  checksum     URLs
  2353            2353             23896752987  http://foobar.com/img2353        ftp://abc.org/img2353
  2353-local      2353-local       23723098638  http://foobar.com/img2353-local  ftp://...
  2353            2985             35489345842  http://foobar.com/img2353-2985   ftp://....
  2985            3142             ...          ...
  2353            3142             ...          ...
The first line (2353/2353) denotes the basic image package. Ideally, an updated client is distributed with that image set, so the client wouldn't need to download it.
The second line denotes the local changes the server has. For example, the user may have downloaded the server and added a couple of archetypes, and thus now has that local set.
The next line (2353/2985) is an update package. The server was updated to version 2985 of the archetypes at some point, and that file brings the images up to date as of that version.
The following lines denote that the server has now been updated to version 3142. There are two updates available - one to go from the base to that version (2353/3142) and one to go from the last update to the latest version (2985/3142).
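As a rough sketch of how that table could be represented, the field names below simply mirror the columns above - none of this is an existing Crossfire API or file format:

  from collections import namedtuple

  MediaPackage = namedtuple("MediaPackage",
                            ["base_revision", "update_revision", "checksum", "urls"])

  def parse_media_table(text):
      """Parse the plain-text table into MediaPackage entries.
      Lines starting with '#' are treated as comments/headers."""
      packages = []
      for line in text.splitlines():
          line = line.strip()
          if not line or line.startswith("#"):
              continue
          base, update, checksum, *urls = line.split()
          packages.append(MediaPackage(base, update, checksum, urls))
      return packages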
The advantage here is that the client can determine very quickly which image files it needs - it looks at what revision it has and sees what it needs to get to. The download may still take a little while, but downloading a few big files is almost certainly faster than downloading many small ones. And by using URLs, it is now possible to put those media files on fast hosts.
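Continuing the MediaPackage sketch above, the client-side decision could be as simple as this (assuming the server also tells the client the latest revision; the "-local" variants and the ordering of revision strings are glossed over here):

  def plan_downloads(packages, client_revision, latest_revision):
      """Choose which update packages to fetch to move the client from
      client_revision up to latest_revision.  Sketch only: assumes the
      table always provides a forward step from any revision."""
      if client_revision == latest_revision:
          return []                      # already up to date
      # Best case: one package goes straight from our revision to the latest.
      for p in packages:
          if (p.base_revision == client_revision
                  and p.update_revision == latest_revision):
              return [p]
      # Otherwise chain updates until we reach the latest revision.
      plan, rev = [], client_revision
      while rev != latest_revision:
          step = next(p for p in packages
                      if p.base_revision == rev and p.update_revision != rev)
          plan.append(step)
          rev = step.update_revision
      return plan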
I believe most of this could be done automatically - the collect script (or whatever) just has to keep track of what has changed. Perhaps the bmaps file now includes checksum values, and the collect script reads those in so it can see what is different. Different versions of the bmaps file would also need to be stored away, but that is not a problem. Ideally, rather than the bmaps file always being written in alphabetical order, new images would be added at the end. In this way, the name-to-number mapping also remains constant and doesn't need to be sent down every time.

The part that would likely need to be done manually is putting the files on the ftp/web server. There probably needs to be some confirmation before an update is pushed. For example, I might be experimenting and running make collect a bunch of times with an image here and there - not until I'm finished do I want to bundle those into an updated revision.
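On the collect side, the bookkeeping might look roughly like this sketch (the real collect script is Perl and the current bmaps file carries no checksums - the layout below is purely an assumption, chosen to illustrate appending new images at the end so the name-to-number mapping stays stable):

  import hashlib
  import os

  def update_bmaps(bmaps_path, image_dir):
      """Refresh per-image checksums and return the names that changed,
      i.e. the images to bundle into the next update package."""
      entries = []                       # [number, name, checksum], in file order
      known = {}
      if os.path.exists(bmaps_path):
          with open(bmaps_path) as f:
              for line in f:
                  if not line.strip():
                      continue
                  num, name, checksum = line.split()
                  entries.append([int(num), name, checksum])
                  known[name] = entries[-1]

      changed = []
      for name in sorted(os.listdir(image_dir)):
          with open(os.path.join(image_dir, name), "rb") as f:
              checksum = hashlib.md5(f.read()).hexdigest()
          if name in known:
              if known[name][2] != checksum:   # image changed in place
                  known[name][2] = checksum
                  changed.append(name)
          else:                                # new image: append at the end,
              entries.append([len(entries), name, checksum])
              changed.append(name)             # existing numbers never move

      with open(bmaps_path, "w") as f:
          for num, name, checksum in entries:
              f.write(f"{num} {name} {checksum}\n")
      return changed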
For folks using official server releases with no added archetypes, the client will know very quickly that it has an up-to-date image archive.
I'd suggest that this be made the standard way, so all the messy caching of data on the client can be removed - at start, the client will know it has all the image data it needs at hand. And if the image-name-to-number mapping is also stabilized, that code goes away as well.
IP-Address : 209.204.178.229
Old Revision: none
New Revision: http://wiki.metalforge.net/doku.php/dev:media_updates
--
This mail was generated by DokuWiki at
http://wiki.metalforge.net/