VOGONS


First post, by superfury

Rank: l33t++

I'm considering adding support for AVI file output (video + audio at the aspect ratio currently in use, after scaling to a fixed resolution: 800x600 for VGA, a custom resolution for CGA, or 768p/1080p, depending on the current aspect ratio). Of course, screen mode changes will simply start a new recording with an increasing number appended, for simplicity.

It's easy to buffer the rendered audio and the individual rendered frames (resized to the current output resolution of the window). I just need to know what needs to be written to the file. I think simple uncompressed video and audio would be the easiest to implement, without needing any additional SDL or non-SDL libraries (which might or might not be cross-platform).

I'm currently thinking of a simple fopen (and fclose when finished), after which frames and chunks of audio are written to the file. Or do I need to buffer the video and audio separately in temporary files (uncompressed frames and audio) and combine them into a single AVI file when finished? Or can the AVI file be created on the fly (like the .WAV audio output)?
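
Something like the sketch below is what I have in mind, if that even works for AVI: since AVI is RIFF-based like WAV, the same "append chunks, patch the sizes on fclose" pattern from my WAV writer might carry over. Function names are made up, the required hdrl header list and idx1 index are omitted, and a little-endian host is assumed:

#include <stdio.h>
#include <stdint.h>

static long riffsizepos; /* position of the RIFF size field, patched on close */

FILE *avi_start(const char *filename)
{
    FILE *f = fopen(filename, "wb");
    if (!f) return NULL;
    fwrite("RIFF", 1, 4, f);
    riffsizepos = ftell(f);
    fwrite("\0\0\0\0", 1, 4, f); /* placeholder size, patched in avi_finish */
    fwrite("AVI ", 1, 4, f);
    /* A real file needs a LIST 'hdrl' (avih + strh/strf per stream) here, */
    /* and the chunks below have to live inside a LIST 'movi'. */
    return f;
}

void avi_write_frame(FILE *f, const void *pixels, uint32_t size)
{
    fwrite("00db", 1, 4, f);    /* '00db' = uncompressed frame of stream 0 */
    fwrite(&size, 4, 1, f);     /* little-endian chunk size assumed */
    fwrite(pixels, 1, size, f);
    if (size & 1) fputc(0, f);  /* RIFF chunks are word-aligned */
}

void avi_write_audio(FILE *f, const void *samples, uint32_t size)
{
    fwrite("01wb", 1, 4, f);    /* '01wb' = audio data of stream 1 */
    fwrite(&size, 4, 1, f);
    fwrite(samples, 1, size, f);
    if (size & 1) fputc(0, f);
}

void avi_finish(FILE *f)
{
    uint32_t riffsize = (uint32_t)(ftell(f) - (riffsizepos + 4));
    fseek(f, riffsizepos, SEEK_SET);
    fwrite(&riffsize, 4, 1, f); /* patch the total size, just like the WAV writer */
    /* The frame count in avih and an idx1 index would also have to be written here. */
    fclose(f);
}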

My current .WAV file output creation library:
https://bitbucket.org/superfury/unipcemu/src/ … ave.c?at=master

Author of the UniPCemu emulator.
UniPCemu Git repository
UniPCemu for Android, Windows, PSP, Vita and Switch on itch.io

Reply 1 of 2, by vladstamate

Rank: Oldbie

An uncompressed stream is not really feasible. For video alone, 800x600x4 bytes x60 frames comes to approximately 115 MB per second (at 60 FPS). Even at 24 FPS with RGB instead of RGBA it is still around 35 MB per second. You definitely want some sort of on-the-fly compression, especially once you add sound. There are plenty of free (open source and fully cross-platform) video stream compression libraries out there you can use, of which the best is FFMPEG.
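
A quick back-of-the-envelope check of those rates (plain C, purely illustrative):

#include <stdio.h>

int main(void)
{
    double rgba_60fps = 800.0 * 600 * 4 * 60; /* 32bpp at 60 FPS */
    double rgb_24fps  = 800.0 * 600 * 3 * 24; /* 24bpp at 24 FPS */
    printf("800x600 RGBA @ 60 FPS: %.1f MB/s\n", rgba_60fps / 1e6); /* ~115.2 */
    printf("800x600 RGB  @ 24 FPS: %.1f MB/s\n", rgb_24fps / 1e6);  /* ~34.6 */
    return 0;
}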

YouTube channel: https://www.youtube.com/channel/UC7HbC_nq8t1S9l7qGYL0mTA
Collection: http://www.digiloguemuseum.com/index.html
Emulator: https://sites.google.com/site/capex86/
Raytracer: https://sites.google.com/site/opaqueraytracer/

Reply 2 of 2, by Kisai

Rank: Member
superfury wrote:

I'm considering adding support for AVI file output (video + audio at the aspect ratio currently in use, after scaling to a fixed resolution: 800x600 for VGA, a custom resolution for CGA, or 768p/1080p, depending on the current aspect ratio). Of course, screen mode changes will simply start a new recording with an increasing number appended, for simplicity.

It's easy to buffer the rendered audio and the individual rendered frames (resized to the current output resolution of the window). I just need to know what needs to be written to the file. I think simple uncompressed video and audio would be the easiest to implement, without needing any additional SDL or non-SDL libraries (which might or might not be cross-platform).

I'm currently thinking of a simple fopen (and fclose when finished), after which frames and chunks of audio are written to the file. Or do I need to buffer the video and audio separately in temporary files (uncompressed frames and audio) and combine them into a single AVI file when finished? Or can the AVI file be created on the fly (like the .WAV audio output)?

My current .WAV file output creation library:
https://bitbucket.org/superfury/unipcemu/src/ … ave.c?at=master

It's not really viable to do this. The AVI format can't handle resolution changes on the fly, so it would require a fixed-resolution output. What you can do, however, is use an MP4 container or an MPEG-2 TS container. The only advantage AVI has is that it operates at the frame level, whereas MPEG operates at the timecode level.

Either will let you use PCM audio if you want it to remain uncompressed. You always want to sync to audio. That said, the real question is which video codec you wish to use. You can technically stuff RGB video into an MPEG container (e.g. h.264 at crf=0 or qp=0 in ffmpeg), but that only turns off quantization; it doesn't turn off colorspace conversion. Avoiding that takes more effort, since you have to specify the pixel formats explicitly.
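
As a rough sketch of that (made-up function name, assuming the libavcodec C API and an FFMPEG build with libx264 RGB support; error handling omitted), asking for lossless RGB would look something like this:

#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>

/* Sketch only: request lossless RGB H.264 from libavcodec. */
AVCodecContext *setup_lossless_rgb_encoder(int width, int height, int fps)
{
    /* libx264rgb keeps the RGB colorspace; plain libx264 would convert to YUV. */
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264rgb");
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = width;
    ctx->height = height;
    ctx->time_base = (AVRational){1, fps};
    ctx->pix_fmt = AV_PIX_FMT_BGR24; /* an RGB pixel format, so no colorspace conversion */

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "qp", "0", 0);             /* qp=0: quantization effectively off */
    av_dict_set(&opts, "preset", "ultrafast", 0); /* favour encoding speed */
    avcodec_open2(ctx, codec, &opts);
    av_dict_free(&opts);
    return ctx;
}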

Look at DOSBOX-X if you want to see a deprecated way of incorporating FFMPEG. There is an alternative AVI writer in DOSBOX-X's code as well, but it's a bit buggy and stops writing at the 32-bit integer size boundary instead of creating a new OpenDML AVI. The MPEG writer in DOSBOX supports resolution switches because the MPEG container supports them.

If you do not close the file properly, the resulting AVI will be unreadable. It is preferable to write out the data every X frames and buffer up to 2 minutes of video, but if the emulator crashes, the file will still be unusable. This is a problem with all versions of DOSBOX if it is closed without being given a chance to finalize the capture.

The reason DOSBOX's ZMBV codec works so well is that it delta-compresses frames using zlib. It actually works very well with no I-frames, but it is practically unusable for editing if recompressed without I-frames (key frames). I once made a few alternative versions using LZO and LZMA: while I could squeeze a little more out of LZMA, it was incredibly slow, and while LZO is faster, it doesn't save as much space. My suggestion is really that you are going to get stuck with a single-threaded encoder or decoder at some point no matter what, so the preferred approach is to lightly compress the video at the original square-pixel resolution and set the display aspect ratio in the output so that the player knows how the video should be played back.
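
To illustrate the delta-frame idea (a simplified sketch of the principle only, not the real ZMBV bitstream with its block-based motion compensation):

#include <stdint.h>
#include <string.h>
#include <zlib.h>

/* XOR the new frame against the previous one and deflate the result.
   Unchanged pixels become runs of zero bytes, which zlib packs down very well.
   'out' must hold at least compressBound(framesize) bytes; if 'prev' starts out
   zeroed, the first frame effectively becomes a key frame. */
unsigned long compress_delta_frame(const uint8_t *cur, uint8_t *prev,
                                   uint8_t *delta, uint8_t *out,
                                   unsigned long framesize)
{
    unsigned long outsize = compressBound(framesize);
    unsigned long i;
    for (i = 0; i < framesize; ++i)
        delta[i] = cur[i] ^ prev[i];  /* difference against the previous frame */
    compress2(out, &outsize, delta, framesize, Z_BEST_SPEED);
    memcpy(prev, cur, framesize);     /* the new frame becomes the reference */
    return outsize;                   /* number of compressed bytes in 'out' */
}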

The fastest/most efficient video codec I was able to design before giving up used the PPMd compression from 7-Zip on the frame data, but the gain wasn't significant enough to switch away from zlib. If you want to capture emulated video at display resolution, you actually need h.264/h.265 hardware encoding support, which doesn't do lossless compression; that's the entire point of it, it favors speed over quality.

One more thing: most emulators have to be slowed down to get frame-perfect emulation for capture. If you don't make the emulator wait before drawing/playing the next frame/audio samples, then any AVI output you generate will be so full of dropped frames that it will not be usable either.
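
For the pacing side, waiting out each frame's deadline before presenting it is enough in principle. A sketch, assuming SDL (which UniPCemu already uses) and a hard-coded 60 FPS for illustration:

#include <SDL.h>

#define FRAME_MS (1000 / 60) /* ~16ms per frame; a real implementation would track the remainder */

static Uint32 nextframe = 0;

/* Call once per emulated frame, before rendering/capturing it. */
void wait_for_next_frame(void)
{
    Uint32 now = SDL_GetTicks();
    if (nextframe == 0) nextframe = now; /* first call: start the clock */
    if (now < nextframe)
        SDL_Delay(nextframe - now);      /* early: wait for the deadline */
    nextframe += FRAME_MS;               /* schedule the next frame's deadline */
}

If 'now' is already past the deadline, that is exactly a dropped frame, which the capture side would then have to duplicate or account for.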