First post, by Malvineous
Hi all,
I've just started looking at the new .DRO Adlib capture format in the CVS version of DOSBox, and as I figure out how it works, I'm beginning to wonder what the reasoning was behind its implementation.
As far as I can tell, it's a more optimised version of the original .dro format. It won't write out values that have no effect, such as a game writing the same value to the same register, or writing to an unused register. It also compacts the data: instead of writing out the actual register numbers used, you have to look up a code table to find out which registers were originally written to.
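To make the indirection concrete, here's a rough sketch of how I understand the code table works, based on poking at the capture files - the field layout and the low-7-bits detail are my own reading of it, not anything official, so take it with a grain of salt:

```python
# Sketch of the code-table indirection (my own understanding, not official):
# the header carries a codemap - a list of the register numbers the game
# actually touched - and each data pair is then (code, value), where the
# code indexes the codemap instead of naming the OPL register directly.

def decode_pairs(codemap, data):
    """Translate raw (code, value) byte pairs back into (register, value) pairs."""
    out = []
    for code, value in zip(data[::2], data[1::2]):
        # Low 7 bits index the codemap; the high bit appears to select
        # the second register bank (again, my guess from the captures).
        reg = codemap[code & 0x7F]
        out.append((reg, value))
    return out

# Example: a 3-entry codemap, then two captured writes.
codemap = bytes([0x20, 0xA0, 0xB0])
data = bytes([0x01, 0x57,   # code 1 -> register 0xA0, value 0x57
              0x02, 0x3E])  # code 2 -> register 0xB0, value 0x3E

print(decode_pairs(codemap, data))  # [(0xA0, 0x57), (0xB0, 0x3E)]
```

So a hex dump of the data section only shows you codemap indices, which is exactly why you can't eyeball the register writes any more - you'd have to run a translation step like this first.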
Now this is all well and good, but I'm beginning to wonder why this level of optimisation is necessary inside DOSBox itself. Okay, it probably won't slow things down much, but it does seem to bloat DOSBox a little, all for the sake of a few kilobytes saved in capture files - and if that's really important, it could be handled by a standalone utility (I wrote one myself years back to remove redundant data from .raw files).
The main reason I'm not keen on this optimisation happening inside DOSBox is that people like me who write Adlib utilities will find it limits the discoveries that can be made using DOSBox. Speaking from my own experience, seeing all the "irrelevant" data written to the OPL ports is often a great way to understand how a program or game works internally - but once the new capture format optimises all that away, it'll be much harder to work out why a program outputs what it does. And of course, because of the code table, there's no longer any chance of opening up a new .dro file in a hex editor to examine the raw Adlib data - something I frequently do when trying to reverse engineer a music format.
Perhaps I'm missing some compelling reason why the optimisation is important - maybe someone involved with the new capture format could shed some light on this?