Reply 420 of 965, by sergm
wrote:
sergm, I see you're quite active these days on the emulator - that's great.
Can you please explain to a layman like me why you are developing so many different audio drivers (winmm, port, alsa, oss, pulse, qt) and what is the difference between them?
The shortest answer: I love this work 😀
Btw, you could also notice I ignore DSound and WASAPI. Though, the truth is we initially intended QAudioOutput to be the only audio interface. However, it turned out to be the worst option for a realtime MIDI synth, since audio timing (i.e. the number of played samples at the moment) is implemented in QAudio in a ridiculous way. Moreover, as I recently found, the QAudio driver works on Win and Linux only. OSS isn't supported at all, and on Mac it's just silent (don't know the reason why).
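To give a feel for why the "number of played samples at the moment" matters, here is a purely illustrative sketch (my own naming, not the emulator's code) of how a realtime synth pins an incoming MIDI event to a sample offset inside the buffer it is about to render; an inaccurate played-sample count shifts every event in time:

```cpp
// Illustrative only: mapping a MIDI event to a sample offset in the next
// render buffer. playedSamples and latencySamples are assumed to come from
// the audio driver; RenderWindow is an invented helper type.
#include <cstdint>

struct RenderWindow {
    uint64_t firstSample;   // stream position of the buffer's first sample
    uint64_t sampleCount;   // buffer length in samples
};

uint64_t eventOffsetInBuffer(uint64_t playedSamples, uint64_t latencySamples,
                             const RenderWindow &window) {
    // The event should sound right after the audio already queued ahead
    // of the playback cursor.
    uint64_t eventPosition = playedSamples + latencySamples;
    if (eventPosition < window.firstSample)
        return 0;                           // overdue: apply at buffer start
    if (eventPosition >= window.firstSample + window.sampleCount)
        return window.sampleCount;          // belongs to a later buffer
    return eventPosition - window.firstSample;
}
```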
So, we tried PortAudio. Much, much better, but still not perfect. I even added some improvements to PortAudio itself. But it turned out to be easier to write my own driver for every audio platform we wanted to support than to cure PortAudio. 😀 Thus, we have native drivers for the platforms on which PortAudio performs worst. Generally, these enable even more accurate audio timing and reduced latency.
Still, the hardest problem is that particular audio card drivers can affect the timing accuracy (I personally encountered this with a Realtek codec), and this is fatal. 🙁 So, we have an option "Use advanced timing". If it is set, the synth tries to get audio timing from the audio API, and every error affects the rendering accuracy.
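As a rough sketch of what "getting audio timing from the audio API" can look like (again my own names, not mt32emu-qt code), here is an ALSA-flavoured example that derives the played-frame count from the frames written minus the frames still pending in the device buffer; any error reported by the driver feeds straight into the rendering position:

```cpp
// Hypothetical sketch: querying the audio API (ALSA here) for playback timing.
// AlsaTiming, framesWritten and accountWrite() are invented for illustration.
#include <alsa/asoundlib.h>
#include <cstdint>

class AlsaTiming {
    snd_pcm_t *pcm;            // opened playback PCM handle
    uint64_t framesWritten;    // total frames pushed via snd_pcm_writei()

public:
    explicit AlsaTiming(snd_pcm_t *handle) : pcm(handle), framesWritten(0) {}

    void accountWrite(snd_pcm_uframes_t frames) {
        framesWritten += frames;
    }

    // "Advanced timing": playedFrames = written - pending. If the codec
    // driver reports a wrong delay, the rendering position is wrong too.
    uint64_t playedFrames() {
        snd_pcm_sframes_t pending = 0;
        if (snd_pcm_delay(pcm, &pending) < 0 || pending < 0) {
            pending = 0;  // on error, assume everything written was played
        }
        return framesWritten - (uint64_t)pending;
    }
};
```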
If this option is not set, we estimate audio timing on the basis of the sample time elapsed during actual rendering, compute the actual sample rate of the audio output device (either the hardware or the software mixer of the OS / sound server), and use averaged values. This way allows perfect rendering (with errors of about 0-2 samples) but has a longer recovery period in case of an x-run or high CPU load. By default, this mode is used to work around possible timing issues caused by buggy audio drivers / mixers / servers, etc. But the user always has the option.
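The estimated-timing fallback could be sketched roughly like this (illustrative only, under my own naming and with an assumed smoothing factor): track rendered samples against a monotonic clock, derive and smooth the effective output sample rate, and predict the playback position from it. The smoothing is what makes single glitches harmless but also what lengthens recovery after an x-run:

```cpp
// Hypothetical sketch of estimated audio timing: the playback position is
// derived from wall-clock time and a smoothed measurement of the device's
// actual sample rate, instead of asking the audio API.
#include <chrono>
#include <cstdint>

class TimingEstimator {
    using Clock = std::chrono::steady_clock;
    Clock::time_point startTime;
    uint64_t renderedSamples = 0;
    double actualSampleRate;   // smoothed estimate of the real output rate

public:
    explicit TimingEstimator(double nominalSampleRate)
        : startTime(Clock::now()), actualSampleRate(nominalSampleRate) {}

    // Called after each buffer of `samples` frames has been rendered.
    void accountRendered(uint64_t samples) {
        renderedSamples += samples;
        double elapsed =
            std::chrono::duration<double>(Clock::now() - startTime).count();
        if (elapsed > 1.0) {
            double measuredRate = renderedSamples / elapsed;
            // Heavy averaging keeps x-runs and CPU spikes from disturbing
            // the estimate, at the cost of slower recovery afterwards.
            actualSampleRate = 0.99 * actualSampleRate + 0.01 * measuredRate;
        }
    }

    // Predicted number of samples played so far, independent of the driver.
    uint64_t estimatedPlayedSamples() const {
        double elapsed =
            std::chrono::duration<double>(Clock::now() - startTime).count();
        return (uint64_t)(elapsed * actualSampleRate);
    }
};
```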