VOGONS


Reply 40 of 53, by Scali

User metadata
Rank l33t
NewRisingSun wrote:

I have repeated the procedure, this time

  1. recording the 500Hz file instead of the piano piece
  2. recording at both of the speeds that the Tandy 1000 TX supports
  3. choosing the 40 column text mode for improved legibility on the composite output.

While the piano piece did not sound so bad, the sine wave exposes the choppiness of the output without mercy.

Thanks! So as James-F already mentioned, this confirms my expectations: a v1.x DSP does not work differently from v2.x or v3.x ones.
So I can use these routines without a problem for cards that only support single-cycle mode.
And the 'no busy wait' version seems to be the best one.
It assumes that when the card signals the interrupt at the end of the buffer, the DSP is not in 'busy' mode, so it writes the first command byte right away, instead of first reading back the status byte and checking the busy flag. This saves a few precious cycles on very slow machines (e.g. an 8088 at 4.77 MHz), reducing the glitch to a minimum.
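Assuming the standard DSP command set (0x14 is the documented 8-bit single-cycle DMA output command, followed by length−1 low byte then high byte), the restart boils down to writing three bytes to the DSP write port; a sketch of the sequence (helper name is mine):

```python
# Sketch of the "no busy wait" restart: on the end-of-buffer IRQ, the
# handler writes these bytes to the DSP write port immediately, instead
# of first polling the busy flag (bit 7 of the write-status port).

DSP_CMD_8BIT_SINGLE = 0x14  # 8-bit single-cycle DMA output

def restart_bytes(length):
    """Bytes written to the DSP to restart playback of `length` samples.
    The DSP takes length-1, low byte first."""
    n = length - 1
    return [DSP_CMD_8BIT_SINGLE, n & 0xFF, (n >> 8) & 0xFF]
```

The 'busy wait' variant would poll the status port before each of those writes; the trick here is simply to skip the first poll.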

And it also shows that the 'DSP hack' is useless in practice, it glitches far more than a well-optimized interrupt handler.

NewRisingSun wrote:

Why would anyone think that this might happen? The DSP does not know the internal state of the DMA controller.

That's what I thought too. Under normal circumstances, the device doesn't know or care about the difference between single-transfer and auto-init modes on the DMA controller. But well, this is what they say on osdev: http://wiki.osdev.org/ISA_DMA

Some expansion cards do not support auto-init DMA such as Sound Blaster 1.x. These devices will crash if used with auto-init DMA. Sound Blaster 2.0 and later do support auto-init DMA.

I don't know where this 'wisdom' came from, but I didn't want to rule out the remote possibility that it is true until I had confirmation on actual hardware.
Well, your videos show that my program obviously does not crash your SB, and it only uses auto-init DMA.
I mean, there was a theoretical possibility that there was a bug in the DSP code, where it may have tried to fetch another byte after the DMA reached terminal count, which would then confuse the DSP firmware and crash.
But apparently there isn't. So osdev is wrong, they seem to confuse the 'auto-init' mode of DMA with the 'auto-init' command for the DSP.
If you sent the auto-init command to a v1.x DSP, it probably still would not crash; it would just be an unknown command, so the DSP would simply never start playing the sample, and as a result it would never trigger an interrupt either. If your code relies on that interrupt occurring, it could deadlock, so in that sense you could say it 'crashed'.

But we have busted that myth now, osdev is simply wrong.

NewRisingSun wrote:

As it happens, setting the DMA controller into auto-init mode even when the DSP is programmed for single-cycle mode is a good workaround for yet another hardware error of the Sound Blaster 16 DSP, which occasionally requests one more byte than it should, causing sound effect dropouts in Wing Commander II, Wolfenstein 3-D and Jill of the Jungle.

Yes, anything newer than the SB Pro 2.0 seems quite bugged with single-transfer mode, and I suppose using auto-init is the best solution there.
That is also the approach I want to take: I use auto-init where possible, and the single-cycle routine here is merely a fallback for older DSPs.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 41 of 53, by NewRisingSun

User metadata
Rank Oldbie

Still, having to drive the DSP in auto-init mode makes a Sound Blaster driver unnecessarily complicated, as one needs to have a playback buffer in addition to whatever waveform is currently in memory.
When I looked at the data sheets of the chips on the AdLib Gold 1000, I immediately noticed how Creative Labs should have properly done it: just implement a small FIFO, say 16 bytes, and trigger the IRQ once all programmed bytes have been requested from the DMA, but with 16 bytes still left in the FIFO. That would allow seamless playback of samples directly from memory, without the need of a separate playback buffer. Of course, we know that Creative could not do anything right, but it's nice to think about what could have been.
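A toy simulation of that proposal (names and structure mine, purely to illustrate the timing) shows the driver would always get a full FIFO's worth of samples as reaction time after the IRQ:

```python
# Toy model of the proposed scheme: a small FIFO sits between the DMA
# fetches and the DAC. The IRQ fires the moment the programmed length
# has been fetched from memory, while the FIFO is still full, so the
# driver has that many sample periods to point at the next waveform.
def headroom_after_irq(programmed, fifo_depth=16):
    """Samples still unplayed when the end-of-fetch IRQ fires."""
    fifo = fetched = played = t = 0
    irq_at = None
    while played < programmed:
        # DMA side: keep the FIFO topped up while bytes remain
        while fifo < fifo_depth and fetched < programmed:
            fifo += 1
            fetched += 1
        if irq_at is None and fetched == programmed:
            irq_at = t  # all programmed bytes requested from DMA
        # DAC side: consume one sample per tick
        fifo -= 1
        played += 1
        t += 1
    return played - irq_at
```

With a 16-byte FIFO at 11 kHz that is roughly 1.5 ms to queue the next transfer, with no bounce buffer needed.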

Reply 42 of 53, by Scali

User metadata
Rank l33t

For regular sample playback, yes, the SB is somewhat overcomplicated (but only for samples larger than 64k of course... smaller samples can be played from a single DMA buffer). However, for any kind of synthesis, such as software mixing, you'd need a playback buffer anyway, so it's not an issue really.
Yes, the SB wasn't a very advanced or capable piece of hardware. Cutting corners and solving things on the cheap/with quick-and-dirty hacks seems to have driven its design. Which is what makes it fit so well with the PC platform if you ask me. The whole platform is lolwut.

What the AdLib Gold is doing is basically trying to work around the broken DMA controller design in the PC.


Reply 43 of 53, by rasz_pl

User metadata
Rank Member
NewRisingSun wrote:
Scali wrote:

Will a v1.xx DSP crash when the DMA controller is in auto-init mode?

Why would anyone think that this might happen?
...Wing Commander II

Sound Blaster: The Official Book wrote:

Note: If you are running Windows 3.1 without the DSP upgrade from 1.05 to 2.00, you will find that digital sound playback in Windows can waver and stumble. You may also find that some games, such as Wing Commander II, may lock up your system unexpectedly.

So the last question remains: does Wing Commander II freeze on DSP 1.05? And if so, why?

Reply 44 of 53, by Jo22

User metadata
Rank l33t++

Scali wrote:

Yes, the SB wasn't a very advanced or capable piece of hardware. Cutting corners and solving things on the cheap/with
quick-and-dirty hacks seems to have driven its design. Which is what makes it fit so well with the PC platform if you ask me.

I thought the same. Both the IBM 5150 series and the Sound Blasters are stuck somewhere between 70s-era technology and modern-day technology.
On one hand they use archaic 74-series TTL and GAL(!) logic chips, on the other hand then-cutting-edge technology like
intelligent microcontrollers with built-in ROM code, such as the 8042 (KBC) and 805x (SB DSP).
I think that makes the IBM PC in part such an adorable platform, even though it was intended as a soulless workhorse:
It constantly evolved, but at the same time retained its heritage. Speaking of IBM, the OS/2 team also got something
very impressive done. The seamless integration of its rivals, DOS and Windows, was akin to a philosophy of co-existence.
(Sure, IBM had tactical reasons for this, but the OS/2 devs did more than required, i.e. they really cared about compatibility/integration.)

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 45 of 53, by SirNickity

User metadata
Rank Oldbie

I mean this endearingly, but the Sound Blaster is such a cobbled together POS... it's better than no sound card, but they did just the bare minimum to qualify. 😁 I've done some sound programming with ALSA and early Win API, and have dabbled in embedded hardware for funsies, but I've never worked with PC hardware at the ASM level before. So I started reading the hardware programming guide recently. I've always kind of wondered how PC sound cards typically handle sample rates. It seems most DACs out there in the real world are tied to a clock that is either a multiple of 44.1kHz or 48kHz. Odd sample rates, or for that matter anything below 32kHz, would have to be fed through a SRC or interpolated in software. But no. There's no low-jitter clock on a Sound Blaster, no sir. The playback rate is modified by a timer register on a microcontroller, and bit-banged to the DAC! WHUT. 🤣 If I calculated it all correctly, it's not even possible to play back at exactly 11025 or 22050 or 44100 Hz. 8000Hz, yes, but everything else is just "close enough" I guess. Was it really essential to be able to play samples at 10,989.01Hz or 13,513.51Hz? Do we really need 256 steps of similarly arbitrary sample rates? What a ghetto design...
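Those oddly specific figures fall straight out of the DSP's divider, rate = 1 MHz / (256 − time constant), the formula quoted later in the thread; a quick check (helper names are mine):

```python
# The "arbitrary" rates are just integer divisions of a 1 MHz timebase.
def rate(tc):
    """Playback rate for a given time constant byte."""
    return 1_000_000 / (256 - tc)

def tc_for(hz):
    """Nearest time constant for a requested rate."""
    return 256 - round(1_000_000 / hz)
```

Time constant 165 gives 10,989.01 Hz, 182 gives 13,513.51 Hz, and only round divisors like 8000 Hz (constant 131, divisor 125) come out exact; asking for 11025 Hz lands you on constant 165, i.e. the 10,989.01 Hz "close enough" rate.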

Reply 46 of 53, by Scali

User metadata
Rank l33t
SirNickity wrote:

Do we really need 256 steps of similarly arbitrary sample rates?

Well, we did, in the 80s.
If you look at the Amiga's Paula chip, it works basically the same way (although it uses a 16-bit divider). In the case of the Paula, you basically get a 4-channel wavetable synth because it had 4 separate DACs, each with a programmable sample rate, so you could play samples at any pitch.

256 steps wouldn't quite be enough for that though. However, resampling everything to 44 kHz would be expensive, and also increase memory requirements.
I suppose it's only 256 steps because they happen to use an 8-bit register for the internal timing loop. But the idea of supporting multiple sample rates certainly makes sense. It allows the developer to freely choose between compact low-quality samples and large high-quality samples, and anything in between.


Reply 47 of 53, by rasz_pl

User metadata
Rank Member
SirNickity wrote:

What a ghetto design...

It was brilliant. They took a handful of off-the-shelf chips, sprinkled on some custom firmware, and took over a whole lucrative market segment overnight with a <$60 BOM, >$300 SRP product.

Reply 48 of 53, by SirNickity

User metadata
Rank Oldbie

I get the bit-banged DAC thing. It's cheap and cheerful. It works. OK, fine. 😀

But, the stepped time constant thing doesn't make sense to me at all. I'm aware they were blazing a trail (and so there was no rule book per se), and I'm honestly not familiar enough with late-80s digital audio to know whether people were often using arbitrary sample rates. It seems to me like that would be perilous, because, while the SB might support a rate of X, some other card that used another arbitrary time constant divisor would support slightly different rates, and so there would be speed differences. Admittedly, probably inconsequential differences, and there are probably more important things to care about -- like 64K memory allocation barriers and such.

And, yes, the possibility of using the time constant as a variable-speed playback mechanism is interesting, but as you said -- too coarse to be useful in this case. Especially since you can't (I presume) even use the full range: A time constant value of 255, in mono, would be 1.0MHz. So that implies a max of 233, which is 43,478.26Hz in mono, and 21,739.13Hz in stereo. It appears this rate was chosen because of the microcontroller's 12MHz max clock, which is what they used. But say they had used a clock of 11.2896MHz -- that would have evenly divided into 11.025 / 22.050 / 44.1kHz. (Digikey shows this as an available off-the-shelf crystal today, though admittedly not as well-stocked as 12MHz. Not sure of its relative popularity or cost at the time.) I don't know if these standard rates were used in the original tech literature from before the SB Pro. Maybe compatibility with other digital audio standards had no relevance then? It's hard to say from the vantage point of +30 years, and given that the SB was a de facto standard, there's a good chance game audio was recorded on one, and expected to be played back on one, so there would be no speed error for most people.
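The arithmetic behind that hypothetical crystal swap checks out: 11.2896 MHz divides evenly into all three "standard" rates, which 12 MHz does not (helper name is mine):

```python
# Why an 11.2896 MHz crystal would have hit the CD-family rates exactly,
# where the 12 MHz part the SB actually used cannot.
def exact_divider(clock_hz, rate_hz):
    """Integer divider reaching rate_hz exactly, or None if impossible."""
    return clock_hz // rate_hz if clock_hz % rate_hz == 0 else None
```

11,289,600 / 256 = 44,100, / 512 = 22,050, / 1024 = 11,025 -- all clean powers of two, whereas 12,000,000 has no integer divider for any of them.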

Reply 49 of 53, by Scali

User metadata
Rank l33t
SirNickity wrote:

It seems to me like that would be perilous, because, while the SB might support a rate of X, some other card that used another arbitrary time constant divisor would support slightly different rates

What other card? 😀
SB quite literally set the standard.

SirNickity wrote:

Maybe compatibility with other digital audio standards had no relevance then?

Again, what standards?
44.1 kHz was the only common standard around, because of CD audio. But the first generations of SB couldn't handle that. Only SB 2.0 and SB Pro could (by which time, CPUs, memory and hard drives were faster and larger, so it was easier to use larger samples).
In practice I think most samples used in games and other software in the early days of SB were 8 kHz to 11 kHz mono 8-bit samples.

The Amiga tops out at 28 kHz by the way. Again a somewhat 'arbitrary' limit imposed by the hardware (if you bypass DMA and bitbang the sound chip with the CPU, you can get it to 56 kHz).
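Paula works the same divider way: on a PAL machine the output rate is a 3.546895 MHz timebase divided by the 16-bit period register. The ~28 kHz and ~56 kHz figures line up with a DMA period floor of about 124 and roughly half that for CPU-driven output (the exact floor of 124 is my assumption of the commonly cited DMA limit):

```python
PAL_CLOCK = 3_546_895  # Paula period timebase on a PAL Amiga

def paula_rate(period):
    """Output sample rate for a given Paula period register value."""
    return PAL_CLOCK / period
```

Period 124 gives about 28.6 kHz; period 62 about 57.2 kHz, i.e. the "28 kHz" and "56 kHz" ballpark figures above.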


Reply 50 of 53, by cde

User metadata
Rank Member

Having just tested with my CT2290, regardless of whether L1 is disabled (Athlon XP), I get a perfect result with auto-init and with the single-cycle hack. With single-cycle/single-cycle I have some faint noise added, but nothing dramatically bad. I'm not hearing any loud pop or click when the sample loops.

EDIT: Having tested the original CD version of DOTT with the default buffer size of 8k, IRQ 5, DMA 1; with L1 disabled the game sounds completely fine. Maybe I'm just lucky, but on my particular setup the pops-and-clicks issue appears non-existent.

Reply 51 of 53, by TheGreatCodeholio

User metadata
Rank Oldbie
NewRisingSun wrote on 2017-04-17, 19:24:

Still, having to drive the DSP in auto-init mode makes a Sound Blaster driver unnecessarily complicated, as one needs to have a playback buffer in addition to whatever waveform is currently in memory.
When I looked at the data sheets of the chips on the AdLib Gold 1000, I immediately noticed how Creative Labs should have properly done it: just implement a small FIFO, say 16 bytes, and trigger the IRQ once all programmed bytes have been requested from the DMA, but with 16 bytes still left in the FIFO. That would allow seamless playback of samples directly from memory, without the need of a separate playback buffer. Of course, we know that Creative could not do anything right, but it's nice to think about what could have been.

They didn't add a 16-sample FIFO until the Sound Blaster 16, and even then, only if you use the new DSP 4.xx playback/recording commands AND you set the bit to enable it.
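The DSP 4.xx transfer commands encode that enable bit directly in the command byte; a sketch of the widely documented layout (helper name is mine):

```python
# SB16 DSP 4.xx transfer command byte: 0xB0 base for 16-bit, 0xC0 for
# 8-bit; bit 3 = record, bit 2 = auto-init, bit 1 = FIFO enable.
def sb16_cmd(bits16, autoinit, fifo, record=False):
    cmd = 0xB0 if bits16 else 0xC0
    if record:
        cmd |= 0x08
    if autoinit:
        cmd |= 0x04
    if fifo:
        cmd |= 0x02
    return cmd
```

So 0xB6 is the familiar "16-bit auto-init output with FIFO" command; leave bit 1 clear and the FIFO stays out of the path even on an SB16.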

DOSBox-X project: more emulation better accuracy.
DOSLIB and DOSLIB2: Learn how to tinker and hack hardware and software from DOS.

Reply 52 of 53, by TheGreatCodeholio

User metadata
Rank Oldbie
SirNickity wrote on 2019-02-04, 21:23:

I mean this endearingly, but the Sound Blaster is such a cobbled together POS... it's better than no sound card, but they did just the bare minimum to qualify. 😁 I've done some sound programming with ALSA and early Win API, and have dabbled in embedded hardware for funsies, but I've never worked with PC hardware at the ASM level before. So I started reading the hardware programming guide recently. I've always kind of wondered how PC sound cards typically handle sample rates. It seems most DACs out there in the real world are tied to a clock that is either a multiple of 44.1kHz or 48kHz. Odd sample rates, or for that matter anything below 32kHz, would have to be fed through a SRC or interpolated in software. But no. There's no low-jitter clock on a Sound Blaster, no sir. The playback rate is modified by a timer register on a microcontroller, and bit-banged to the DAC! WHUT. 🤣 If I calculated it all correctly, it's not even possible to play back at exactly 11025 or 22050 or 44100 Hz. 8000Hz, yes, but everything else is just "close enough" I guess. Was it really essential to be able to play samples at 10,989.01Hz or 13,513.51Hz? Do we really need 256 steps of similarly arbitrary sample rates? What a ghetto design...

No, but you can play at 11.111KHz, 22.222KHz, etc. up to 46KHz. That's the best you can do with 1MHz / (256 - time constant).
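Enumerating the divider's full output confirms both points: none of the CD-derived rates are ever reachable, and the usable maximum tops out at 45454 Hz (the time constant ceiling of 234 is taken from the 45454 Hz figure mentioned later in the thread):

```python
# Every 8-bit output rate the 1 MHz / (256 - TC) divider can produce.
def achievable_rates(max_tc=234):
    return [1_000_000 / (256 - tc) for tc in range(max_tc + 1)]
```

8000 Hz is in the list (divisor 125), but the nearest neighbours of 11025 Hz are 10989 and 11111 Hz, and of 44100 Hz are 43478 and 45454 Hz -- "close enough" is all you get.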

Clone cards are sometimes even less precise given the time constant; they'll round to whatever "close enough" rate they can manage. The Sound Blaster 16 itself rounds to a multiple of some integer value, apparently.

And if your Sound Blaster 16 has an old enough DSP (4.5), there's even a bug in the sample rate handling that, if you graphed every sample rate it supports, shows "spikes" (sample rate calculation errors) at a regular interval along the x (sample rate) axis.

I wrote some tools in DOSLIB to probe the card and graph time constant vs actual sample rate and DMA transfer rate.

http://hackipedia.org/browse.cgi/Computer/Pla … 20documentation



Reply 53 of 53, by TheGreatCodeholio

User metadata
Rank Oldbie

By the way, SB clone chipsets like ESS and OPL3-SAx cards have bugs that might let you set a sample rate higher than what the original Sound Blaster can play. It even works, assuming the ISA bus can keep up.

On the laptops where I tested ESS and OPL3-SAx, the card CAN keep up unless you do anything else on the system, like read the hard disk.

You'll notice on ESS chipsets, if you use the ESS extended DSP commands you can drive it up to about 400KHz.

And on Creative Sound Blaster Live! PCI cards with the SB16 TSR enabled, you can push just past 160KHz before integer overflow errors cause the sample rate to drop to zero.

Anyway on real SB hardware the maximum sample rate appears to be 45454Hz, which is "close enough" to 44100Hz, right?
