VOGONS



Sound card industry


Reply 20 of 48, by ZanQuance

Rank: Member
Scali wrote:
640K!enough wrote:

See, with audio, even if you want to mix a gazillion channels in 24-bit 192 kHz resolution, a simple CPU can do that at < 1% CPU load. So, there really is no use case for the average usage pattern for sound chips

Mixing is simple, but the major issue is developers' audio resource budgets: once you start applying convolution for reverb and HRTF on the sound sources, CPU usage quickly climbs past the allotted budget. This is why dedicated DSPs that offload the job are nice to have, but they aren't a necessity today the way they used to be.
Sound cards have always had misleading marketing behind them, especially from Creative. 16-bit, 48 kHz playback is all you ever need for playback, period, end of story.
For mixing studios, the extra headroom of 24-bit is more than enough. Pro Tools does everything with 32-bit mixing in software, so who needs a dedicated DSP for that anymore?

Gamers do; we want advanced audio effects rendered on cool hardware DSPs that push our audio senses to their limits 😁 It's not just about CPU usage or FPS anymore, it's about 3D audio features and quality 😁
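To put rough numbers on the convolution-budget point, here's a back-of-the-envelope sketch. The 1024-sample block, 2-second impulse response, and NumPy stand-ins are my assumptions for illustration, not figures from any real engine:

```python
import time

import numpy as np

SR = 48_000          # sample rate (Hz)
BLOCK = 1024         # samples per audio callback
IR_LEN = SR * 2      # a 2-second reverb impulse response

rng = np.random.default_rng(0)
block = rng.standard_normal(BLOCK)   # one callback's worth of one voice
ir = rng.standard_normal(IR_LEN)     # room/HRTF impulse response

# Naive time-domain convolution: BLOCK * IR_LEN multiply-adds per callback,
# times SR / BLOCK callbacks per second, *per voice* with effects applied.
mac_per_s = BLOCK * IR_LEN * (SR / BLOCK)
print(f"naive cost: {mac_per_s / 1e9:.1f} billion MACs/s per voice")

t0 = time.perf_counter()
direct = np.convolve(block, ir)              # O(BLOCK * IR_LEN)
t_direct = time.perf_counter() - t0

t0 = time.perf_counter()
n = BLOCK + IR_LEN - 1
# Same result via the FFT (O(n log n)) -- how software engines keep it affordable.
via_fft = np.fft.irfft(np.fft.rfft(block, n) * np.fft.rfft(ir, n), n)
t_fft = time.perf_counter() - t0

print(f"direct: {t_direct * 1e3:.2f} ms, FFT: {t_fft * 1e3:.2f} ms per block")
```

Multiply that per-voice cost by a few dozen sources and the budget problem is obvious, FFT tricks or not.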

Reply 21 of 48, by Scali

Rank: l33t
ZanQuance wrote:

Mixing is simple, but the major issue is developers' audio resource budgets: once you start applying convolution for reverb and HRTF on the sound sources, CPU usage quickly climbs past the allotted budget. This is why dedicated DSPs that offload the job are nice to have, but they aren't a necessity today the way they used to be.

Well, that's my point... everyday use basically involves just mixing multiple channels.
Even so, I suppose GPGPU could go a long way if you want more advanced processing power.
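For scale, a sketch of what "just mixing channels" amounts to (channel count and gains are arbitrary; the idea is accumulate wide, then clamp):

```python
import numpy as np

def mix(channels: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Mix N int16 channels into one stream with per-channel gain."""
    # Accumulate in float32 so intermediate sums can't wrap around...
    acc = (channels.astype(np.float32) * gains[:, None]).sum(axis=0)
    # ...then hard-clip back into the int16 range.
    return np.clip(acc, -32768, 32767).astype(np.int16)

# 32 voices, one second each at 48 kHz: a single multiply-add pass.
rng = np.random.default_rng(1)
voices = rng.integers(-32768, 32767, size=(32, 48_000), dtype=np.int16)
out = mix(voices, np.full(32, 1 / 32, dtype=np.float32))
print(out.shape, out.dtype)  # (48000,) int16
```

That whole pass is a tiny fraction of one core's throughput, which is why mixing alone never justified dedicated hardware.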

ZanQuance wrote:

Gamers do; we want advanced audio effects rendered on cool hardware DSPs that push our audio senses to their limits 😁 It's not just about CPU usage or FPS anymore, it's about 3D audio features and quality 😁

I wonder... I've always seen 3D audio as a failed experiment of the late 90s/early 2000s... We had various solutions, but support in games and hardware was generally poor, and eventually games started to just include their own processing routines if they wanted 3D audio, streaming everything out as LPCM so any hardware works.
I suppose it's similar to physics acceleration... games want 'some degree' of physics, but don't care about pushing the boundaries with dedicated acceleration hardware and APIs.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 22 of 48, by 640K!enough

Rank: Oldbie
Scali wrote:

Actually... no.
See, with audio, even if you want to mix a gazillion channels in 24-bit 192 kHz resolution, a simple CPU can do that at < 1% CPU load. So, there really is no use case for the average usage pattern for sound chips.

With graphics, it's an entirely different story. Even if you take the most high-end CPU out there, and spend 100% of its power on graphics, you're not likely to get anywhere near the performance and quality of even 10-15 year old GPUs. Likewise, if you want to play 4k video, all but the most high-end CPUs would simply choke on the decoding task if they had no assistance from the GPU whatsoever.

That's why even those low-end integrated GPUs are still everywhere... even for people who don't want to game or anything, these GPUs are required for a decent everyday experience. It certainly WILL take a long time before there is CPU power enough... if ever (I've been hearing this for 10-15 years... look up SwiftShader for example, and the gap has only become larger).

I was mostly exaggerating to make a point. With graphics, it's always higher resolution, more effects, more cores, MORE speed, etc. Yet with audio, a few software-mixed PCM channels with basic software-rendered effects, played through noisy on-board audio is "good enough"?

Why don't gamers want realistic positional audio with reflections, properly modelled material density, etc. to match the detailed 3D environment? Beyond just mixing and effects, what about decent, dynamic music, synthesised in real-time with virtual-acoustic quality instruments. Surely, all of this would need hardware assistance in the same way that the graphical aspect does.

If we're willing to settle for basic audio, who needs 4k? Surely, software-rendered 1280x1024 is enough for everyone. 🤣

Reply 23 of 48, by Scali

Rank: l33t
640K!enough wrote:

I was mostly exaggerating to make a point. With graphics, it's always higher resolution, more effects, more cores, MORE speed, etc. Yet with audio, a few software-mixed PCM channels with basic software-rendered effects, played through noisy on-board audio is "good enough"?

I don't know... It's like I said with physics... apparently there's some level of 'good enough' that gamers will settle for.
Perhaps people are simply far more visually oriented. Eye candy and realism sell; everything else doesn't, it seems.
4K TVs with 3D options, Oculus Rift, HTC Vive and other VR goggles... it all sells.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 24 of 48, by ZanQuance

Rank: Member

I'm getting sick and tired of being logged out while I type a long reply...

A3D is its own middleware and only needs a simple scene for wavetracing, exported from the game geometry, but it will use DS3D buffers. This isn't required, though: games can be programmed to use only the A3D API without DS3D, and it will then use its own internal buffers for 3D audio.
EAX is a DS3D extension and doesn't do any 3D audio processing itself, so it's not fair to compare A3D with EAX; they are apples and oranges.
Sensaura is middleware but piggybacks on other APIs for all-round support.
It is simple to add A3D to a game, and actually more work to add EAX support with EAGLE (tagging each room/scene and wall for materials), although you can do the same with A3D for air/water filters and wall material tagging.
VR needs real-sounding audio, and so do games like Skyrim. Post-processed SBX and Dolby surround are poor spatialization techniques and miss the properly rendered 3D audio mark.

Last edited by ZanQuance on 2018-04-17, 20:35. Edited 1 time in total.

Reply 25 of 48, by vvbee

Rank: Oldbie
640K!enough wrote:

If we're willing to settle for basic audio, who needs 4k? Surely, software-rendered 1280x1024 is enough for everyone. 🤣

I don't see games pushing more graphical realism, nor gamers wanting it. They want, and get, the same hackish 3D, just at higher resolutions. I don't need that for graphics and don't need it for sound; I'll take faithful global illumination at a lower resolution, and whatever the equivalent is for sound.

Reply 26 of 48, by Plasma

Rank: Member
640K!enough wrote:
Scali wrote:

Actually... no.
See, with audio, even if you want to mix a gazillion channels in 24-bit 192 kHz resolution, a simple CPU can do that at < 1% CPU load. So, there really is no use case for the average usage pattern for sound chips.

With graphics, it's an entirely different story. Even if you take the most high-end CPU out there, and spend 100% of its power on graphics, you're not likely to get anywhere near the performance and quality of even 10-15 year old GPUs. Likewise, if you want to play 4k video, all but the most high-end CPUs would simply choke on the decoding task if they had no assistance from the GPU whatsoever.

That's why even those low-end integrated GPUs are still everywhere... even for people who don't want to game or anything, these GPUs are required for a decent everyday experience. It certainly WILL take a long time before there is CPU power enough... if ever (I've been hearing this for 10-15 years... look up SwiftShader for example, and the gap has only become larger).

I was mostly exaggerating to make a point. With graphics, it's always higher resolution, more effects, more cores, MORE speed, etc. Yet with audio, a few software-mixed PCM channels with basic software-rendered effects, played through noisy on-board audio is "good enough"?

Why don't gamers want realistic positional audio with reflections, properly modelled material density, etc. to match the detailed 3D environment? Beyond just mixing and effects, what about decent, dynamic music, synthesised in real-time with virtual-acoustic quality instruments. Surely, all of this would need hardware assistance in the same way that the graphical aspect does.

If we're willing to settle for basic audio, who needs 4k? Surely, software-rendered 1280x1024 is enough for everyone. 🤣

How many channels do you need for a pair of headphones? And on-board audio isn't noisy anymore with digital outputs.

Most of the "realistic" effects you list are better pre-recorded in a studio, with actual musicians, instruments, and materials. The real thing always sounds better than something fully computer-generated that can't quite make it across the uncanny valley. You also don't need dedicated hardware for dynamic music. Some games were doing dynamic music in the 90s.

We are reaching the point where graphics resolution is "good enough." If you have a 23" monitor, you don't need 4K. If you don't play games, integrated graphics are "good enough." You can't sell what people won't buy.

Reply 27 of 48, by ZanQuance

Rank: Member

Dynamics of sound, proper filtering of reflections/diffractions, reverb, occlusions/obstructions, atmospheric filtering for different environments, etc. cannot be properly precomputed; it would require the game developers to tag all geometry with the proper effects and hand-tune everything, which is time-consuming and not budget-friendly. Real-time computation of these effects is the only way forward. If, however, you meant that computed audio samples aren't the way to go, then I agree, unless the physics tech catches up and you can no longer tell the acoustic difference between recorded and computed sounds.

Reply 28 of 48, by 640K!enough

Rank: Oldbie
ZanQuance wrote:

I'm getting sick and tired of being logged out while I type a long reply...

I have had the same problem. When I remember, I take the time to select the text and copy it to the clipboard before clicking Preview or Submit, so that I still have it if something goes wrong.

ZanQuance wrote:

VR needs real-sounding audio, and so do games like Skyrim. Post-processed SBX and Dolby surround are poor spatialization techniques and miss the properly rendered 3D audio mark.

What would be your ideal audio experience at the moment? Given your interest in the 8830, is A3D 2 your gold standard, or just a starting point?

Reply 29 of 48, by Plasma

Rank: Member
ZanQuance wrote:

Dynamics of sound, proper filtering of reflections/diffractions, reverb, occlusions/obstructions, atmospheric filtering for different environments, etc. cannot be properly precomputed; it would require the game developers to tag all geometry with the proper effects and hand-tune everything, which is time-consuming and not budget-friendly. Real-time computation of these effects is the only way forward. If, however, you meant that computed audio samples aren't the way to go, then I agree, unless the physics tech catches up and you can no longer tell the acoustic difference between recorded and computed sounds.

I am talking about the idea of fully computer-generated audio samples/instruments. Things like occlusion and reflection/reverb can already be done in software. I'm not sure what you mean by "atmospheric filtering"...

Reply 30 of 48, by ZanQuance

Rank: Member
640K!enough wrote:
ZanQuance wrote:

I'm getting sick and tired of being logged out while I type a long reply...

I have had the same problem. When I remember, I take the time to select the text and copy it to the clipboard before clicking Preview or Submit, so that I still have it if something goes wrong.

Yeah when I remember too...🙁

640K!enough wrote:

What would be your ideal audio experience at the moment? Given your interest in the 8830, is A3D 2 your gold standard, or just a starting point?

That's the real crux of our industry: there is no gold standard right now. I would vote Sensaura the most robust and mature 3D audio solution to date; it was widely used on game consoles, and the original Xbox games made pretty great use of it. On the PC, though, it's all hit and miss depending on the game... I would vote Descent 3 the best A3D 2.0 title, as it makes use of all of A3D 2.0's features.

Plasma wrote:

I am talking about the idea of fully computer-generated audio samples/instruments. Things like occlusion and reflection/reverb can already be done in software. I'm not sure what you mean by "atmospheric filtering"...

Atmospheric filtering is what A3D applies for underwater and fog effects; it's filtering for AIR audio effects.
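For illustration only, here's a crude stand-in for that kind of filtering: a one-pole low-pass whose cutoff falls with distance, mimicking how a medium absorbs high frequencies first. The cutoff-vs-distance mapping is invented for the sketch, not anything A3D actually does:

```python
import numpy as np

def air_absorb(signal: np.ndarray, distance_m: float, sr: int = 48_000) -> np.ndarray:
    """Toy 'atmospheric' filter: one-pole low-pass, cutoff drops with distance."""
    # Hypothetical mapping: cutoff falls from 20 kHz toward 1 kHz with distance.
    cutoff = max(1_000.0, 20_000.0 / (1.0 + distance_m / 50.0))
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)  # one-pole coefficient
    out = np.empty_like(signal, dtype=np.float64)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)        # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        out[i] = y
    return out

# A distant source loses its high-frequency content.
tone = np.sign(np.sin(2 * np.pi * 8_000 * np.arange(4_800) / 48_000))  # bright tone
near, far = air_absorb(tone, 1.0), air_absorb(tone, 500.0)
print(np.abs(near).max(), np.abs(far).max())  # the far copy is much duller/quieter
```

A real implementation would use measured absorption curves per medium (air, water, fog), but the shape of the effect is the same.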

Reply 31 of 48, by 640K!enough

Rank: Oldbie
ZanQuance wrote:

That's the real crux of our industry: there is no gold standard right now. I would vote Sensaura the most robust and mature 3D audio solution to date; it was widely used on game consoles, and the original Xbox games made pretty great use of it. On the PC, though, it's all hit and miss depending on the game... I would vote Descent 3 the best A3D 2.0 title, as it makes use of all of A3D 2.0's features.

If you could stand on your soapbox and issue non-negotiable orders to (say) Creative about what hardware to develop, to Microsoft about what APIs to offer, and to game developers about how to use them, what would you mandate?

Reply 32 of 48, by spiroyster

Rank: Oldbie

Is it that hard to implement this kind of thing (beyond needing industry backing and support, which is surely the really hard part)? The geometry is already there (if you're pushing pixels), so all that needs to be added (from the content side of things) are materials, BxDFs, etc. for the medium you want to model. I take it audio simulation would be akin to volumetric, physically based tracing (modelling wavelengths and their propagation through mediums, as with IR/UV outside the visible spectrum, rather than RGB colour spaces).
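As a toy version of that tracing idea, the classic image-source trick gives delay/gain "taps" for a direct path plus one reflection. The geometry and the 1/r spreading model here are simplifications I'm assuming for the sketch, not any particular API:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def taps(src, lst, wall_y):
    """Direct path plus one first-order reflection off the plane y = wall_y,
    via the image-source trick: mirror the source across the reflecting plane."""
    src, lst = np.asarray(src, float), np.asarray(lst, float)
    mirror = src.copy()
    mirror[1] = 2 * wall_y - src[1]          # reflect source across y = wall_y
    out = []
    for s in (src, mirror):
        d = np.linalg.norm(lst - s)          # path length in metres
        out.append((d / SPEED_OF_SOUND,      # arrival delay (s)
                    1.0 / max(d, 1e-6)))     # 1/r spherical spreading gain
    return out

# Listener 10 m from the source, both 2 m above a floor at y = 0.
(direct_delay, direct_gain), (refl_delay, refl_gain) = taps(
    src=(0.0, 2.0), lst=(10.0, 2.0), wall_y=0.0)
print(f"direct: {direct_delay * 1000:.1f} ms, reflection: {refl_delay * 1000:.1f} ms")
```

Scale that to thousands of rays against a simplified "acoustic" mesh and you have roughly what wavetracing-style systems do; as with physics, the audio mesh would need to be a coarser proxy of the render mesh.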

Like others say, though, I really don't think there is a market for it, unfortunately, and it requires industry adoption (a lot more effort from content creators and producers, perhaps) for very little end gain for the user. Most of our brain cycles go into interpreting vision, so it makes sense to spend most of the runtime effort there and optimise those areas. It's not like this hasn't been tried before and failed, perhaps because 99% of gamers aren't that bothered. While they wouldn't be happy about it, a lot of gamers could still play games without sound... I'm not sure the same could be said for those attempting to play with audio only and no visuals, even if the audio is as spatially realistic as possible.

I can see it now... multiple sound sources.... deferred styley.

Shame there isn't more binaural as standard, though... that would make me happy. I've had some radical, borderline-schizophrenic episodes with AKG K340 (electret AND dynamic) headphones... proper head-turning gut reactions, like someone is right there talking to me o.0 ...yeah, what he said.

Reply 33 of 48, by Scali

Rank: l33t
spiroyster wrote:

Is it that hard to implement this kind of thing (beyond needing industry backing and support, which is surely the really hard part)? The geometry is already there (if you're pushing pixels), so all that needs to be added (from the content side of things) are materials, BxDFs, etc. for the medium you want to model. I take it audio simulation would be akin to volumetric, physically based tracing (modelling wavelengths and their propagation through mediums, as with IR/UV outside the visible spectrum, rather than RGB colour spaces).

I suppose the problem, like with volumetric tracing in graphics, physics and such, is that 'the geometry' is not what you want.
For visual graphics, you want very detailed meshes. For volumetric tracing and physics however, these meshes are far too detailed for efficient use. So instead, you want to have 'alternative' geometry optimized for the specific task you want to perform.
I think that is where the problem is. Many games mainly have 'pre-baked' volumetric lighting, not the real thing.

With volumetric tracing, however, DX12 has added conservative rasterization (which only NVIDIA and Intel supported until recently), which lets you use the GPU to convert detailed meshes to volumetric data efficiently in real time. And the next update to DX12 will also add an extra raytracing feature.
So perhaps we can finally get more advanced volumetric lighting... and the same technology may also be applicable to audio processing.
But you'll need a GPU to do it.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 34 of 48, by mirh

Rank: Member
Scali wrote:

And the next update to DX12 will also add an extra raytracing feature.
So perhaps we can finally get more advanced volumetric lighting... and the same technology may also be applicable to audio processing.
But you'll need a GPU to do it.

That's what AMD (and maybe NVIDIA? there's no info or docs on that) is already doing.
Nvidia VrWorks Audio

P.S. Where did Crystal River Engineering end up, then? Is this its last remnant?

Last edited by mirh on 2018-04-19, 19:28. Edited 1 time in total.

pcgamingwiki.com

Reply 35 of 48, by Scali

Rank: l33t
mirh wrote:

That's what AMD (and nvidia maybe? no info or docs on that) is already doing.
Nvidia VrWorks Audio

What TrueAudio Next Is NOT
TrueAudio Next currently does not perform ray tracing for sound.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 36 of 48, by mirh

Rank: Member
Scali wrote:

TrueAudio Next currently does not perform ray tracing for sound.

That doesn't exactly mean what you think it means (and it's a bit of cherry-picking).

Steam® Audio models indirect sound by calculating an IR using ray tracing. This IR is then used in a convolution reverb effect to add indirect sound effects to either individual sound sources (in the case of source-centric convolution reverb) or a submix of sound reaching the listener (in the case of listener-centric convolution reverb).

Also, anyway, that just refers to Steam Audio. TAN itself can actually be accelerated via the FireRays library.

And you can read more on that here.
There's a seven-page, in-depth paper about it, if you are really interested.

pcgamingwiki.com

Reply 37 of 48, by spiroyster

Rank: Oldbie

Another issue with the realism of sound, and its impracticality for gaming, is perhaps the fact that sound is a lot slower than light. Light is so fast that for most purposes we consider it instantaneous; sound, however, is not... this means that in an open space there can be a noticeable lag between the observer seeing an explosion and hearing it. In water, sound travels roughly four times faster than in air... I can't see all these little nuances being advantageous when using physically based sound modelling. In fact, it could be argued that they're a hindrance (especially when reaction times are so important in many games)?
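The lag is easy to quantify (the speeds are textbook round numbers; the function name is mine):

```python
SPEED_SOUND_AIR = 343.0      # m/s at ~20 degrees C
SPEED_SOUND_WATER = 1480.0   # m/s, roughly 4x faster than in air
SPEED_LIGHT = 299_792_458.0  # m/s

def audio_lag(distance_m: float, medium_speed: float = SPEED_SOUND_AIR) -> float:
    """Seconds between *seeing* an event and *hearing* it at this distance
    (light's travel time is negligible by comparison, but subtract it anyway)."""
    return distance_m / medium_speed - distance_m / SPEED_LIGHT

# An explosion 500 m away on a large open map:
print(f"{audio_lag(500):.2f} s in air")   # roughly a second and a half
print(f"{audio_lag(500, SPEED_SOUND_WATER):.2f} s underwater")
```

So a faithful simulation would have sniper shots and distant explosions arriving noticeably late, which is exactly the reaction-time problem described above.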

I say games because it's the only industry that is going to warrant enough widespread support, in my experience. VR, perhaps; CAD isn't big enough; music producers... I doubt it, but I don't know.

Reply 38 of 48, by Gahhhrrrlic

Rank: Member
ZanQuance wrote:

It is simple to add A3D to a game, and actually more work to add EAX support with EAGLE (tagging each room/scene and wall for materials), although you can do the same with A3D for air/water filters and wall material tagging.

Just so I'm certain I understand you correctly: are you suggesting that this can be done without the source code for the game (i.e. support is in some sense modular), or are you saying that if you had the source code, it would be easy to add? What's involved? I don't mean to put you on the spot for a grueling answer, just to understand what the future holds for A3D/Sensaura, if any. It'd be nice to be able to add support to modern games. I built a modern rig with PCI slots specifically for this sort of backward compatibility. I even maintain XP32 in a dual boot for the same reason.

https://hubpages.com/technology/How-to-Maximi … -Retro-Computer

Reply 39 of 48, by Azarien

Rank: Oldbie
spiroyster wrote:

in an open space there can be a noticeable lag between the observer seeing an explosion and hearing it. In water, sound travels roughly four times faster than in air... I can't see all these little nuances being advantageous when using physically based sound modelling. In fact, it could be argued that they're a hindrance (especially when reaction times are so important in many games)?

One could blame reality, but it'd be better to teach people that this is how sound works.