VOGONS


How do I feed mt32emu with data?


First post, by realnc

User metadata
Rank: Oldbie

Hello, and thanks (again!) for this project. It's been a while since I posted here. I've been using Munt for a long time now to play some classic DOS games. By now I can't tell the difference from the actual hardware anymore. 😀

I decided to use the mt32emu library in one of my own projects, but I can't figure out what I'm supposed to send to the Synth object. I have created a MidiStreamParser subclass, and the virtuals (handleShortMessage(), handleSysex(), etc.) are getting called correctly when I call MidiStreamParserImpl::parseStream() on a buffer containing the data of a *.mid file (one that otherwise plays fine in the mt32emu GUI player).

But... what do I do with those? When handleShortMessage() or handleSysex() is called, I forward the data to Synth::playMsg() and Synth::playSysex(). Basically:

void MyMidiStreamParserSub::handleShortMessage(const Bit32u message)
{
    synth->playMsg(message);
}

void MyMidiStreamParserSub::handleSysex(const Bit8u stream[], const Bit32u length)
{
    synth->playSysex(stream, length);
}

But what do I do with the data that is sent to handleSystemRealtimeMessage()? Currently I just ignore it, and only silence comes out of the renderer. Nothing but 0.f samples:

float* buf = new float[len * 2];
synth->flushMIDIQueue();
synth->render(buf, len);

'buf' just contains 'len' samples of silence 😵

How do I feed Synth properly?

Reply 1 of 27, by sergm


Hi, realnc.

Glad you're interested in using the library. FYI, I'm working on making the API a bit clearer (I hope) and easier to use, so you could try the c_interface mode instead. There are some comments there that should be useful if you already know how to use the library 😀
Anyway, what you're doing seems right to me. You may be stuck with silence if you're trying to play on MIDI channel 1, though 😉

Reply 2 of 27, by sergm


Btw, the only System Real-Time message supported by the real devices is Active Sensing. It isn't implemented anywhere in the library: the library does not run in real time and has no timing info other than what the client supplies via the timestamps of the MIDI messages. Still, it looks useful for a MIDI driver and a client application to use it to immediately stop hanging notes, etc.

Reply 3 of 27, by sergm


Another thing to note: MidiStreamParser is only necessary if you have a plain MIDI byte stream on the input. In many cases you already have prepared MIDI events, so you can talk directly to the synth and not bother with the parser.

Reply 4 of 27, by sergm


Hmm, I think I see why you can't hear anything. As I understand it, you're sending all the MIDI messages from a MIDI file to the synth, then doing Synth::flushMIDIQueue(), and then rendering. By doing this, you're emulating a MIDI cable with lightning transfer speed, so all the MIDI events get played within a single MIDI tick.

You can see a good example of MIDI->wave conversion in the smf2wav source.

Reply 5 of 27, by realnc


Hm, I thought the Synth would know how to time the events properly.

I looked into the smf2wav sources, but they're using libsmf 😒

Does the library do the timing, or do I have to do it manually by examining the MIDI events? (I have no idea about MIDI, so I'll have to learn that one first 😜)

Reply 6 of 27, by sergm


You're entirely correct.
When programming something, you indeed have to familiarize yourself with the thing you're implementing against. Synth does set timestamps on the incoming MIDI events automatically, but only if they arrive in real time. When you send a prerecorded MIDI stream to the Synth, the events must be timestamped by the client to ensure proper playback. In fact, a standard MIDI file is a recorded stream of _timestamped_ MIDI events (with some extensions). And invoking flushMIDIQueue() has a special meaning: it makes all the enqueued MIDI events take effect immediately. This is used when you seek (essentially, a fast change of the playback position) in the MIDI player.

Reply 7 of 27, by realnc


Removing the flushMIDIQueue() call makes it produce sound. Just random notes though; they go on for 10 seconds or so, and then silence.

I had hoped this would be as easy as libfluidsynth, haha 🤣 (You feed it the MIDI data from the file you want to play and it just... renders it. It recognizes the timing data in the MIDI stream on its own, so I don't have to do anything other than feed it the MIDI data I read from files.)

I have no idea what I'm doing or where to look, but I guess I'll have to figure it out. 😁

Reply 8 of 27, by sergm


Got it. You thought MidiStreamParser would parse MIDI files, he-he 😀 Nope, the MIDI file format is different. Libsmf may be used to extract the MIDI events with timestamps and feed them to the synth, as smf2wav does. Actually, any MIDI file parser should suit the need.
You can also rip the MidiParser out of the Qt app if you use Qt anyway.
Maybe it's worth extracting the rendering logic from smf2wav and making another library that plays MIDI files 😉

Reply 9 of 27, by realnc


What is the format of the MIDI data that MidiStreamParser parses? In other words, what exactly is it useful for?

If I parse a MIDI file manually, should I just send the MIDI events to the play() routines with appropriately calculated timestamps, or should I use MidiStreamParser to parse them first? The existence of MidiStreamParser is a bit confusing, as there's no indication as to what it's there for.

Reply 10 of 27, by realnc


Wait, I think I get it now. You're not supposed to feed raw MIDI bytes directly to the play() routines. The parser is there to transform the byte stream into Bit32u messages, and you feed those to the play routines.

Reply 11 of 27, by sergm


No. Initially, the library provided just the play methods. While porting the synth application to other platforms, we found that it is in fact uncommon to get fully prepared MIDI messages from the MIDI driver. Instead we get, if not a plain byte stream, then at least fragmented SysEx messages (or packs of several of them at once). So, having this piece of code along with the library seemed useful.

Parsing MIDI files, in contrast, doesn't seem to be needed as widely. We expect the library to be used in real time most often.

Reply 12 of 27, by realnc


I've now tried libsmf to parse the MIDI file, and I'm able to get playback 😀

However, there are some severe glitches with many of my MIDI files. I tried smf2wav on its own, and it produces the exact same glitches. Weirdly, the GUI player works fine 😐

An example MIDI file that triggers this:

http://expirebox.com/download/c485b4b780fbf6b … 076cbe706e.html

The glitch should be obvious once you try smf2wav on the file: a huge number of notes play at once at the beginning of the file (if you skip the starting silence).

I think I tracked down the MIDI messages that cause this:

timestamp - message in hex
1674907 - 0x000032c1
2791997 - 0x00003dc3
557817 - 0x000070c4

They stuck out because they appear at the very beginning, even though their timestamps indicate they should come much later. In any event, if I filter those messages out, the note salad at the beginning goes away, but then the instruments are wrong.

Reply 13 of 27, by sergm

realnc wrote:

Weirdly, the GUI player works fine

No wonder: libsmf is not used there. We looked at some other C++-based SMF tooling, but ended up with our own implementation on top of Qt.

Moreover, there are other weird things about libsmf, e.g. here. Just curious: are you using the actual v1.3, or the patched v1.2 from smf2wav?

Reply 14 of 27, by realnc

sergm wrote:

Moreover, there are other weird things about libsmf, e.g. here. Just curious: are you using the actual v1.3, or the patched v1.2 from smf2wav?

I used both the one bundled with Munt and the latest from upstream Git. Same issue. I'm not a MIDI guru, so I can't tell whether the MIDI events coming from libsmf are somehow wrong, or why Synth chokes on them.

Anyway, I'm re-implementing the Qt-based code without the Qt dependency (using Boost and SDL instead). Let's see how that works out.

Reply 15 of 27, by realnc


I got it working, finally.

But no sooner was it working than the next problem came up: how to detect the end of a track. I see that the Qt app does some overly complicated stuff to find out when to stop rendering. Would it be possible to add a new member function:

bool Synth::queueIsEmpty() const;

This would return whether the queued MIDI events have all been rendered (in other words, whether the internal queue is empty), so it would be easy to tell when we can stop calling Synth::render():

if (synth->queueIsEmpty() && !synth->isActive()) {
    // We're done!
}

I added the function and created a pull request on GitHub.

Reply 16 of 27, by sergm


I don't get the reasoning. You're sending timestamped MIDI messages to the synth; guess what the timestamp of the last one you enqueued means? Having parsed a MIDI file, you already know its length in time, and therefore immediately know the minimum number of samples you need to get from the synth. And in real-time rendering that function makes no sense either.

You correctly check whether the synth is active before you stop the SMF->wave conversion. There is no other way to tell whether the synth has gone silent or still has non-zero samples to output. Moreover, if your MIDI file does not stop all the channels properly (which may happen with some MIDI captures that only set the volume to zero), the synth might never become inactive. So there is quite some sophistication, especially in smf2wav, in determining when to stop.

Reply 17 of 27, by realnc


Yes, but it's easier to just check whether the Synth is done with the queue. Unless of course there's a reason to make it more complicated 😒 Convenience in the API should be a good thing? I mean, you're using Qt; you already know how helpful convenience functions can be compared to having to do every little thing manually 😀 User programs are going to implement the same code over and over again, while for the Synth it's as simple as exposing whether it's done with the queue or not.

Reply 18 of 27, by sergm


Hmm, it's still unclear to me how that would help. I can only imagine a scenario where a whole MIDI file is parsed and sent to the synth (if the queue size allows), and render is then invoked for some small number of samples at a time, so there is some sense in checking whether the queue is empty. In that case:
1) you need to ensure the queue is large enough to fit all the MIDI data;
2) you'll get the end point with a granularity corresponding to your rendering buffer size.
Though I think this is not how it should work. As the conversion is performed off-line, there is no reason to make the rendering buffer small. Instead, you would probably want to render everything in a single shot, so you need to find out up front how many samples to render. Besides, the MIDI file parser immediately tells you where the file ends, so why not use the info that's already available (that's about convenience, heh)?

Reply 19 of 27, by realnc


I'm not sure what you mean. Here's how it looks:

  1. Parse the MIDI file and fill the source queue.
  2. Send MIDI events from the source queue to the synth until the synth's queue is full.
  3. Render N samples (N depends on our audio buffer; usually 4 KB, but it can be quite a bit smaller in a low-latency setup) and send them to the sound card.
  4. Repeat from 2. until both the source and synth queues are empty and the Synth is inactive.

But OK, I guess I can work with counting samples instead of the cleaner way of just asking the synth 'are you done?'. It looked like an obvious improvement to the API to me, but I guess that's just me then 😵