> The idea of volume is that you may want to use the same sound file for
> multiple events, but vary the intensity.

OK, you've got a point there - I hadn't thought of that :)

> Can you provide a better example? Currently, when a sound is played, the
> location of the sound (relative to the player) is sent to the client, so
> the client might know that the offset is -5, 5 to the player, and adjust
> accordingly in terms of volume as well as stereo effect. The current
> sound interface only presumes a 2 channel system - I suppose if someone
> cared, they could in fact do a 4 channel setup (left/right/front/back).

I was thinking of something like 'position, propagation direction'. That way
the client could apply a doppler effect, so the player knows which way the
spell is going - i.e. hear it getting closer, moving away, or heading off in
a different direction. Imagine hearing a big fireball coming towards you ^_^
(rough sketch 1 at the end of this mail)

> The sound stuff could then be dealt with much like the images are,
> however, I'd personally say that a pure cache approach is used (at
<snip snip>
> something like 'you hear a fireball off to the west' (where fireball is
> the printable name).

That sounds like a nice idea. But it means we need a sound list (matching
filename, printable name, duration) and a way to link it to the archetypes.

> I personally don't think a push idea of sending the sound files
> (unrequested) to the client is a good idea. The sound files are in most
> cases much larger than images are (most images are just a couple K, many
> of the sound files are in the 10-20K range). But in addition, I'd
> expect the sound files to be much more static. EG, new sounds may get
> added, but it is unlikely that someone will 'redo' the gong.raw file, so
> you're almost certainly pushing redundant data. And in fact, the sounds
> should probably be included with the client.

Agreed on not pushing. But we could still give the client a way to download
from the server. Maybe something like: the client asks for sound file
fireball.raw, bytes 2000 to 2999, and the server sends just that slice back?
This way the client has the responsibility for the download (and for pacing
it, 100 or 1000 bytes at a time, with a cap), and the server wouldn't have to
care. The downside is that the server would be opening and closing the sound
file a lot. (sketch 2 below)

> This is why I mentioned the idea of the sound structure. In addition
> to knowing that sound X is associated with an object, you have to know
> how it is associated. Thus, you could have something like:
>
> sound_move walk(100)
> sound_attack clang(80)
>
> and so on. I suppose if you want different sounds for the same event,
> you could do them as a list, like perhaps:
<snip snip>

Now that'd be a great idea, having different sounds for the same event.
Though I'd rather see archetype lines like

sound_event apply name volume duration <and so on>

instead of

sound_apply name <...>

That makes it easier to add new events without having to change the parsing
logic. (sketch 3 below)

On the same topic (but a different thing), I'm thinking of tweaking the
loading parser to accept 'type KEY' or 'type DOOR' instead of
'type <number you always forget>'. (sketch 4 below)

Nicolas
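PS - a few rough sketches in C of what I mean above. Everything in them
(names, constants, command framing, numbers) is invented for illustration;
none of it exists in the current code.

Sketch 1 - volume, stereo pan and doppler pitch from 'position + propagation
direction'. The inverse-distance falloff and the tile-based speed of sound
are arbitrary placeholders:

#include <math.h>
#include <stdio.h>

#define SPEED_OF_SOUND 30.0   /* tiles/second, tune to taste */

/* dx, dy: offset of the sound from the player, in tiles.
 * vx, vy: velocity of the source (its propagation direction times
 * its speed), in tiles/second.  All names are made up. */
static void sound_params(double dx, double dy, double vx, double vy,
                         double *volume, double *pan, double *pitch)
{
    double dist = sqrt(dx * dx + dy * dy);

    /* Simple inverse-distance falloff: full volume when on top of us. */
    *volume = 1.0 / (1.0 + dist);

    /* Stereo pan: -1 = full left (west), +1 = full right (east). */
    *pan = (dist > 0.0) ? dx / dist : 0.0;

    /* Radial velocity; negative means the source is approaching. */
    double vrad = (dist > 0.0) ? (dx * vx + dy * vy) / dist : 0.0;

    /* Classic doppler shift: f' = f * c / (c + v_radial). */
    *pitch = SPEED_OF_SOUND / (SPEED_OF_SOUND + vrad);
}

int main(void)
{
    double vol, pan, pitch;

    /* A fireball 5 tiles west, flying straight at the player. */
    sound_params(-5.0, 0.0, 10.0, 0.0, &vol, &pan, &pitch);
    printf("vol=%.2f pan=%.2f pitch=%.2f\n", vol, pan, pitch);
    return 0;
}

The pitch comes out above 1 while the fireball approaches and drops below 1
once it has passed, which is exactly the 'hear it getting closer' effect.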
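Sketch 2 - the byte-range download, server side, with a cap so a client
can't ask for the world. Opening and closing the file on every request is
exactly the cost I mentioned; keeping an open FILE* per client would avoid
it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CHUNK 1000   /* cap on bytes per request */

/* Returns a malloc'ed buffer holding bytes [start..end] of the named
 * sound file, or NULL if the request is refused.  *len gets the number
 * of bytes actually read (may be short near the end of the file). */
static char *read_sound_range(const char *name, long start, long end,
                              size_t *len)
{
    if (start < 0 || end < start || end - start + 1 > MAX_CHUNK)
        return NULL;
    if (strchr(name, '/') != NULL)    /* don't escape the sound dir */
        return NULL;

    char path[256];
    snprintf(path, sizeof(path), "sounds/%s", name);

    FILE *fp = fopen(path, "rb");     /* opened/closed every request */
    if (fp == NULL)
        return NULL;

    char *buf = malloc(end - start + 1);
    if (buf != NULL && fseek(fp, start, SEEK_SET) == 0) {
        *len = fread(buf, 1, end - start + 1, fp);
    } else {
        free(buf);
        buf = NULL;
    }
    fclose(fp);
    return buf;
}

int main(void)
{
    size_t len = 0;
    char *buf = read_sound_range("fireball.raw", 2000, 2999, &len);

    if (buf != NULL) {
        printf("got %zu bytes\n", len);
        free(buf);
    } else {
        printf("request refused\n");
    }
    return 0;
}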
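Sketch 3 - parsing the generic 'sound_event <event> <file> <volume>
<duration>' line I proposed. Since the event name is plain data, adding a
new event doesn't touch the parser at all:

#include <stdio.h>

/* Field names follow the proposed line format; sizes are arbitrary. */
struct sound_info {
    char event[32];    /* "apply", "move", "attack", ... */
    char file[64];     /* e.g. "fireball.raw" */
    int  volume;       /* 0..100 */
    int  duration;     /* milliseconds */
};

/* Returns 1 if the line was a well-formed sound_event line. */
static int parse_sound_line(const char *line, struct sound_info *si)
{
    return sscanf(line, "sound_event %31s %63s %d %d",
                  si->event, si->file, &si->volume, &si->duration) == 4;
}

int main(void)
{
    struct sound_info si;

    if (parse_sound_line("sound_event apply fireball.raw 80 1200", &si))
        printf("%s: %s vol=%d dur=%dms\n",
               si.event, si.file, si.volume, si.duration);
    return 0;
}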
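Sketch 4 - the 'type KEY' parser tweak: accept the old numeric form or a
symbolic name via a small lookup table. The numbers below are examples only;
the real table would be generated from the server's type defines:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const struct { const char *name; int num; } type_names[] = {
    { "DOOR", 23 },    /* example values, not the real defines */
    { "KEY",  24 },
};

static int parse_type(const char *arg)
{
    if (isdigit((unsigned char)arg[0]))
        return atoi(arg);              /* 'type 23' still works */

    for (size_t i = 0; i < sizeof(type_names) / sizeof(type_names[0]); i++)
        if (strcmp(arg, type_names[i].name) == 0)
            return type_names[i].num;

    return -1;                         /* unknown name */
}

int main(void)
{
    printf("type KEY  -> %d\n", parse_type("KEY"));
    printf("type DOOR -> %d\n", parse_type("DOOR"));
    return 0;
}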