On 10-Sep-2002 Mark Wedel wrote:

> I'm not certain that your forecast is correct. Crossfire has been around
> for 10 years or something. Given that timeframe, the object structure has
> increased, but isn't a nightmare now, and I doubt it would be even if the
> current overloading of some values was removed.

I'd have to disagree. Much of the early life of crossfire was spent adding
global behavior. The last three years, where we've added more type-unique
behavior, are where you've seen the proliferation of rampant overloading.
If we were to take it out now, we'd add at least 50 or 60 values, possibly
more. As we get more and more interesting code, I think we'll see object.h
get more and more bloated.

> It should be mentioned that what I'm mostly against is using fields of the
> object structure to enforce/use some feature which is completely non
> obvious. I don't for example have a problem with 'sp' being used to denote
> the spell to cast for many objects - sp can at least look like spell.

I'd suggest that this is even more dangerous. We use variables as
mnemonics. Code that uses a variable for two different (but similarly
spelled) concepts, where the mnemonics look the same, is harder to debug
than a clear exception.

> I'm presuming by the above, access would be like:
>
>   op->int[WAND_NUM_CHARGES]
>   op->flag[BUTTON_ACTIVE_ON_PUSH_ONLY]
>   op->ptr[WEAPON_SLAYING]
>
> or the like?

Correct. We make it clear that it's type-specific by the name, and perhaps
with macros like:

#define GET_TYPE_SPECIFIC_INT(op,x) op->type_ints[x]

> That works. Does make debugging more difficult - right now I find it a
> pain simply to debug the flag values, because I have to do the translation
> by looking at define.h (eg, value 10 in the FLAG tables is ..., so it
> means this object has that set).
> If this happens for a lot of fields, looking at the object structure would
> be pretty difficult.

Adding more flag bits will only ensure that we have to look at define.h
more. And, if you ask me, making define.h the canonical location for all
type-specific information is a very good thing. It took me two days of
tracing around to figure out how the dragon stuff worked, because nowhere
does it say that 'exp' in a dragon ability force means that's what my
dragon's metabolism is attuned to.

Remember, my suggestion is only to put values that are meaningful for a
small number of types into the type-specific storage. That way, if I'm
debugging type-specific behavior, I look at define.h, see every type
variable used by that type, and go from there. Right now, I have to trace
through tons of code to figure out what any particular usage of 'exp' or
'sp' means.

> For the other ones, it was done like:
>
>   union {
>       Player *player;
>       Stationary_Caster *caster;
>       Monster *monster;
>   } cst;
>
> And extend as needed. At least in that way, you could look at the type,
> see what type of object it was, and look at the appropriate field in the
> union.

This works. If we go this route, I'd suggest a union of structs, with one
struct per type that needs storage. That would make it a little more
readable and make it clearer that the storage is type-specific.
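To make that concrete, here's a rough sketch of the array flavor (none of
these names, sizes, or index values are real crossfire identifiers - they
are only here to show the shape of the idea):

#include <stdlib.h>

#define NUM_TYPE_INTS 4
#define NUM_TYPE_PTRS 2

/* The per-type index names live in define.h, which then becomes the one
 * place that documents what each slot means for each type: */
#define WAND_NUM_CHARGES 0   /* type_ints[0] when type == WAND */
#define EXIT_DEST_X      0   /* type_ints[0] when type == EXIT */
#define EXIT_DEST_Y      1   /* type_ints[1] when type == EXIT */
#define EXIT_DEST_MAP    0   /* type_ptrs[0] when type == EXIT */

typedef struct obj {
    int   type;                      /* existing field */
    /* ... all the other, globally meaningful fields ... */
    int   type_ints[NUM_TYPE_INTS];  /* meaning depends on op->type */
    void *type_ptrs[NUM_TYPE_PTRS];  /* freed blindly when non-NULL */
} object;

/* Companion to the GET macro above; used by loader.l below. */
#define SET_TYPE_SPECIFIC_INT(op,x,v) ((op)->type_ints[(x)] = (v))

/* Cleanup never has to know the type: just walk the pointer slots. */
static void free_type_ptrs(object *op)
{
    int i;
    for (i = 0; i < NUM_TYPE_PTRS; i++) {
        if (op->type_ptrs[i]) {
            free(op->type_ptrs[i]);
            op->type_ptrs[i] = NULL;
        }
    }
}

The union-of-structs flavor would just swap the two arrays for a union
keyed off op->type; either way, define.h stays the single place that
documents the per-type meanings.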
> However, this approach required a bit more work. Some obvious problem
> areas in trying to fix this up are fields that are currently overloaded
> and saved that way.
>
> For example, you could argue for fixing up exits so that destination
> doesn't use slaying and hp and sp. But in my system, it's very hard to do
> that - if you get the slaying field, you would need to know the object
> type to store it in the right place.

Well, not necessarily. In the case of pointers, it would make freeing up
the memory harder, because we'd need all sorts of type-specific code in
there. My way, we'd only have to loop through the pointer list and free()
up anything that's not null.

> You say you wouldn't worry about that, which certainly makes it easier.

I wouldn't worry about having loader.l be type aware - no context
sensitivity on a type-by-type basis. For example, an exit in my scheme
might look like this in a map file:

arch somesortofexit
x 3
y 6
destination_x 12
destination_y 4
destination_map /some/map
end

A magic mouth might be this:

arch magicmouth
x 3
y 5
activate_on_push 1
end

Some loader.l code to go along with these type-specific values could be:

# The following are type-specific values.
# These values all decode to the first type-specific int:
^destination_x{S}       SET_TYPE_SPECIFIC_INT(op, 0, IVAL);
^activate_on_push{S}    SET_TYPE_SPECIFIC_INT(op, 0, IVAL);

This is actually a bad example, because activate_on_push should be a flag,
not an int, but you get the idea.

If I had a sword like this:

arch sword
destination_x 3
end

then loader.l would happily set op->type_ints[0] = 3, which may or may not
cause odd behavior. This is a weakness in the scheme. However, I think the
net gain in overall readability would be so great that these problems
would be rare. And when they do show up, they would not be difficult to
spot in the map files.

Now, even though there would be no type-by-type contextual checking, my
scheme makes it MUCH easier to do value-by-value checking. For example, I
recently got into a long stream of server crashes when a corruption got
into my dragon character. The cause was the 'exp' value in the dragon
force, which is actually used as the indicator of what my dragon's
metabolism is attuned to, and is used raw as an index into an array. Right
now, loader.l has no idea that in a dragon force object 'exp' shouldn't be
greater than a certain value, because values of a few million are
reasonable in many objects. But if we used my scheme, we could add bounds
checking on all type-specific variables. So loader.l might be:

# The following are type-specific values.
# These values all decode to the first type-specific int:
^destination_x{S}       SET_WITH_BOUNDS_CHECK(op, 0, IVAL, MIN_X, MAX_X, "destination_x");
^activate_on_push{S}    SET_WITH_BOUNDS_CHECK(op, 0, IVAL, 0, 1, "activate_on_push");
^dragon_metabolism{S}   SET_WITH_BOUNDS_CHECK(op, 0, IVAL, 0, 3, "dragon_metabolism");

In this case, we could end up with a log entry of:

Type-specific variable 'dragon_metabolism' in object 'blah' is out of
bounds in map '/city/buggymap'
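And SET_WITH_BOUNDS_CHECK itself wouldn't need to be anything clever.
Something along these lines would do (a sketch only - the fprintf() stands
in for whatever logging call we'd really use, and I'm assuming we can get
at the object's name and map path the way we usually do):

#include <stdio.h>

/* Bounds-checked setter for type-specific ints.  If the value read from
 * the map file falls outside [lo, hi], complain and clamp it into range
 * rather than store something other code may later use as a raw index. */
#define SET_WITH_BOUNDS_CHECK(op, idx, val, lo, hi, varname) \
    do { \
        long v_ = (val); \
        if (v_ < (lo) || v_ > (hi)) { \
            fprintf(stderr, \
                "Type-specific variable '%s' in object '%s' is out of " \
                "bounds in map '%s'\n", \
                (varname), \
                (op)->name ? (op)->name : "(null)", \
                (op)->map ? (op)->map->path : "(no map)"); \
            v_ = (v_ < (lo)) ? (lo) : (hi); \
        } \
        (op)->type_ints[(idx)] = (int) v_; \
    } while (0)

One macro, and every type-specific variable we ever add gets range
checking and a useful log line for free.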
> But the other problem is knowing how objects deal with each other and
> foreseeing future needs. And you have cases where objects of different
> types have similar functions, and it may not be clear how the objects
> should get grouped together. Eg, you could have an object of type X, which
> has some features of A, and some features of B. A and B are different
> enough not to use the same values. Do you put object X as yet another
> class? Do you try to extend A or B for X?

If it has its own type, then put it into the type-specific storage. There
will always be cases where it's not cut and dried, but I think this scheme
will end up being an overall net improvement. The code isn't complex, and
the very nature of the code will enforce self-documentation.

In fact, we could have a master file that is gawk'ed over to generate
loader.l, adding in the type-specific variables as needed. That's
elaborate, and not needed now, but in the future, as we get larger numbers
of these variables, it would keep things very readable. And it's not very
hard to do.

> While not a defense, the power of computers has grown much faster than
> crossfire's object or other growth. At current time, I'd much rather trade
> a few megabytes of ram and some percentage of cpu impact for ease of
> debugging and code readability.

This is true. But it's a fine line between that and some very inefficient
code. We could (and sometime in the future we might) want run-time
arbitrary attributes on objects, like MUSHes have, where the attribute
name and value are stored on each object. This, really, is the direction
we're heading, and the above ideas might make it easier to switch over to
something like that in the future.

> In summary, I'm not sure what should be done. Eliminating overloaded
> values is something that should be done. Yet, I don't want it to be harder
> than it currently is to debug the code. I'm also not sure that redoing the
> object structure following the current method will add that much bloat to
> the object structure.

Honestly, I think you're probably right at this point. If we added a value
for every overloaded value, we might get away with under 256 added bytes
per object - saying there are about 50 overloaded variables is probably
around the mark. For 20000 objects, that's 5 extra megabytes. But it's 5
meg we don't need to use.

I think the above ideas would be a net reduction in debugging complexity,
and the scheme is easily extensible in the future with little size impact.
It also starts getting us to use functions or function-like macros for
storage and retrieval of type-specific data, which makes moving to dynamic
attribute storage easier. Add to that how easy bounds checking becomes,
and how little work it is to start implementing this, and it becomes a
good way to ease out of overloading into something more readable and
usable.

Kurt.