VOGONS

Nvidia VrWorks Audio


First post, by ZanQuance

Rank Member

VRWorks Audio Demo

Since the announcement of VRWorks Audio the other day, I've been really excited, waiting to hear a demo of it in action.
Full Aureal WaveTracing has finally returned! And backed by Nvidia, yay!!!

I'm still resurrecting the Aureal cards regardless; most likely VRWorks will require a GTX 980 or higher to run properly.
Now I just need to pick up one of those new GTX 1080s and a Vive. Mmmmmmm

Reply 2 of 25, by DracoNihil

Rank Oldbie
F2bnp wrote:

but knowing Nvidia they're gonna keep it all to themselves a.k.a. vendor exclusive.

Free and open source is love, free and open source is life. NVIDIA doesn't care though.

Seriously though, has anyone even tried to do anything with AMD TrueAudio?

Are there any serious open source efforts toward hardware-processed audio via GPGPU and other ASICs?

“I am the dragon without a name…”
― Κυνικός Δράκων

Reply 3 of 25, by Scali

Rank l33t
F2bnp wrote:

This is cool and all, but knowing Nvidia they're gonna keep it all to themselves a.k.a. vendor exclusive. I don't see this getting adopted at all, sadly 🙁.

Perhaps it requires some nVidia-specific hardware?
I believe there's some special DSP circuitry for TrueAudio in AMD GPUs, so it won't work on anything but those GPUs.

With a bit of luck, this will move the same way Direct3D/OpenCL did in the past:
first you get vendor-specific solutions (Glide, CUDA, etc.), and when the demand is there, it gets standardized into an API that can run across a wider range of hardware.
DirectSound3D was once supposed to be an API for accelerated 3D audio, but for some reason it never really caught on/evolved, and various games started implementing their own stuff in software.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 4 of 25, by swaaye

Rank l33t++

OpenAL was supposedly open.

I heard TrueAudio in that horrible Thief reboot. All they did was implement a reverb effect.

I don't know what it would take for a fancy 3D audio API to take off, but it seems clear that it's an uphill battle that not many care about.

Reply 5 of 25, by PhilsComputerLab

Rank l33t++

Agreed, it's great that audio is getting some attention. I miss how A3D and EAX (under Windows XP) sounded over headphones.


Reply 7 of 25, by DracoNihil

Rank Oldbie

I know there are some demos in the scene that use GPGPU code to mix sound/music. Of course, hardware compatibility is a massive issue there; I think in those cases, in order to even hear anything, you have to be connected through HDMI or DisplayPort, or have the audio signals routed out to your main speaker setup.

OpenAL has an open source implementation based on the very bare-bones Loki Software code. That project is called "OpenAL Soft"; the whole hardware-mixed-audio-via-ALSA idea was abandoned long ago.

“I am the dragon without a name…”
― Κυνικός Δράκων

Reply 8 of 25, by Scali

Rank l33t
swaaye wrote:

OpenAL was supposedly open.

I think it's owned by Creative now.

swaaye wrote:

I heard Truaudio in that horrible Thief reboot. All they did was implement a reverb effect.

Yes, I'm not sure what TrueAudio does, or is supposed to do exactly, but I watched some examples and didn't really feel impressed.

I think it was a great touch by nVidia to have the narrator's voice be 'cast' into the 3D world in realtime. It really got the point across of how the 3D audio works in their system. It suffers from the same 'over-processing' that we had in early programmable shader demos though: "Let's make everything super-shiny and bling-bling!". In this case the reverberation seemed rather over-the-top. Not very realistic, but it gets the point across. I'm sure it can be toned down a notch to make it realistic.

At any rate, this is the first time I've heard of an API that gives you path-traced audio.
I believe the earlier systems (EAX, OpenAL, DirectSound3D, TrueAudio?) work in a much simpler way. Only A3D is somewhat similar (albeit probably far more simplified, since we didn't have such powerful hardware at the time), in that it takes the actual 3D geometry into account, rather than just having some 'hardwired' reverb here and some Doppler there.
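
The idea behind geometry-driven audio is easy to sketch: each propagation path through the scene directly yields a delay and an attenuation. A toy illustration (not the actual A3D or VrWorks API; the scene, mirror point, and reflectivity are made up):

```cpp
// Toy illustration of geometry-driven audio: delay and attenuation are
// derived from actual path lengths through the scene, instead of coming
// from a hand-tuned reverb preset.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float distance(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

int main() {
    const float c = 343.0f;                 // speed of sound, m/s
    Vec3 source   = { 0.0f, 1.7f,  0.0f };
    Vec3 listener = { 4.0f, 1.7f,  0.0f };
    Vec3 wallHit  = { 2.0f, 1.7f, -3.0f };  // mirror point on a wall

    float dDirect  = distance(source, listener);                             // direct path
    float dReflect = distance(source, wallHit) + distance(wallHit, listener); // 1st-order bounce

    // Each path becomes a delayed, attenuated copy of the dry signal.
    printf("direct:    %.1f ms, gain %.3f\n", 1000.0f * dDirect / c, 1.0f / dDirect);
    printf("reflected: %.1f ms, gain %.3f\n", 1000.0f * dReflect / c, 0.7f / dReflect); // 0.7 = assumed wall reflectivity
    return 0;
}
```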

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 9 of 25, by DracoNihil

Rank Oldbie
Scali wrote:

Yes, I'm not sure what TrueAudio does, or is supposed to do exactly

Google searching shows these top results:
http://www.amd.com/en-us/innovations/software … ogies/trueaudio
https://en.wikipedia.org/wiki/AMD_TrueAudio
http://www.anandtech.com/show/7868/evaluating … nd-mantle-thief
http://www.anandtech.com/show/7400/the-radeon … feat-asus-xfx/4

There's also this tidbit:
https://www.reddit.com/r/Amd/comments/3xiix1/ … _to_true_audio/

I guess take what you read with a grain of salt.

“I am the dragon without a name…”
― Κυνικός Δράκων

Reply 10 of 25, by ZanQuance

Rank Member

Comments like this are exactly what Jerry Mahabub wanted people to parrot for him.

BRTF is just measured ITD values; it does NOT replace the HRTF model, as he would like people to believe. He's just a snake oil salesman trying to "revolutionize" the industry, as he has stated multiple times in PR and in his demo videos. He's even made crackpot claims that he revolutionized all this back in 2004... yet his oldest patent is from 2009, and his demo video, recorded in 2004, wasn't uploaded until 2011. Each one of his patents basically describes using HRTF anyway...

So many red flags!

TrueAudio runs on Tensilica HiFi EP DSPs and is just like every other audio DSP that predated it. It basically does what EAX did with reverb and requires developers to bake the settings into their games. It provides a generic HRTF set which sounds fine in general, but it doesn't operate per voice; instead it works like SBX and interpolates between the available 7.1 channels, and in my humble opinion it does a worse job than SBX.
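
A grossly simplified sketch of that per-voice vs. virtualized distinction, with a per-ear gain pair standing in for a real HRTF filter (the gain model, angles, and numbers are all made up):

```cpp
// Sketch: speaker virtualization can only filter at the fixed 7.1 speaker
// angles and pans each voice between them; per-voice rendering filters each
// source at its true angle. A simple level difference stands in for HRTF.
#include <cmath>
#include <cstdio>

// Stand-in "HRTF": an interaural level difference for a given azimuth.
void earGains(float azDeg, float& l, float& r) {
    float az = azDeg * 3.14159265f / 180.0f;
    l = 1.0f - 0.5f * std::sin(az);
    r = 1.0f + 0.5f * std::sin(az);
}

int main() {
    float voiceAz = 50.0f;                      // true source direction

    // (a) Virtualization: pan between the two nearest speakers of the
    // layout (30 and 90 degrees here), filtered at *those* angles.
    float t = (voiceAz - 30.0f) / (90.0f - 30.0f);
    float l30, r30, l90, r90;
    earGains(30.0f, l30, r30);
    earGains(90.0f, l90, r90);
    printf("virtualized: L %.3f / R %.3f\n",
           (1 - t) * l30 + t * l90, (1 - t) * r30 + t * r90);

    // (b) Per-voice: filter directly at the source's own 50 degrees.
    float l, r;
    earGains(voiceAz, l, r);
    printf("per-voice:   L %.3f / R %.3f\n", l, r);
    return 0;
}
```

The two outputs differ because interpolating between two fixed filters is not the same as evaluating a filter at the source's true direction; with real HRTFs the spectral error is far larger than with this toy gain model.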

HRTF is "required" to perceive Binaural audio across headphones, whether this is added during post production effects or during a recording with Binaural mic/ears setups.
Our brains get the HRTF filter information from our ear shape along with time delays of the audio reaching each ear, this together provide our brains with the audio cues we've developed to recognize where audio is positioned in 3D space.
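
For the time-delay half of those cues, a back-of-the-envelope estimate is easy. A sketch using the classic Woodworth spherical-head approximation (the head radius is an assumed average):

```cpp
// Back-of-the-envelope interaural time difference (ITD) from the
// Woodworth spherical-head model: ITD = (r/c) * (az + sin az).
// ITD is only one binaural cue; the spectral (HRTF) part is what an
// ITD table alone cannot reproduce.
#include <cmath>
#include <cstdio>

int main() {
    const float r = 0.0875f;   // assumed average head radius, metres
    const float c = 343.0f;    // speed of sound, m/s

    for (float azDeg = 0.0f; azDeg <= 90.0f; azDeg += 30.0f) {
        float az  = azDeg * 3.14159265f / 180.0f;
        float itd = (r / c) * (az + std::sin(az));   // seconds
        printf("azimuth %4.0f deg -> ITD %5.0f us\n", azDeg, itd * 1e6f);
    }
    return 0;
}
```

At 90 degrees this lands around 650 microseconds, which is roughly the maximum delay a human head produces.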

BRTF pretends it has magically done away with the need for HRTF because it supposedly delivers audio the way our "brain perceives" sound rather than our ears, thus needing no individualized HRTF measurements: a "one size fits all" model.
I have so many issues with that blanket statement that I don't know where to begin, besides just flinging "crackpot" and "snake oil salesman" at him.

(Also, Mageoftheyear has no idea what he's talking about with A3D having a 45-degree issue; it does not.)

Reply 11 of 25, by Scali

Rank l33t

As I understand it, TrueAudio is basically a low-level programming interface for the DSPs in certain AMD GPUs. It isn't a complete implementation of 3D sound tracing/modelling, which A3D and VrWorks Audio are. So TrueAudio is more of a hardware-abstraction API, where the others are more like middleware.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 12 of 25, by ZanQuance

Rank Member

TrueAudio is middleware, but since they leave it to the game designers to implement the good audio features, we've all seen the resulting history: minimal effort put into audio, time and time again. Last I checked, you needed to use the TrueAudio plugin in Wwise to implement it. I would take a look at the API, but it's NDA-only, BLEH!!! So I can only surmise from the info they provide and from the games and demos that use it.

EAX and A3D 2.0 both required proprietary hardware to function properly. Sensaura was probably the most versatile of them all, being a software solution, but it made use of certain DSPs like the CS4630, when available, to offload work.

TrueAudio is in a position to accelerate any API if AMD coded it to do so (which would be a really awesome feature), but only offering some convolution reverb and an averaged HRTF set really hurts it. Nvidia is taking a step in the right direction, and hopefully it can fall back to CPU rendering with a limited number of sources if the GPU is not up to par.

Reply 13 of 25, by Scali

Rank l33t
ZanQuance wrote:

TrueAudio is Middleware, but since they leave it to the game designers to implement the good audio features, we've all seen the resulting history of this.

I wouldn't call that middleware.
When I think of middleware, all the good features are already implemented, and you just need to create content.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 14 of 25, by ZanQuance

Rank Member

It is middleware in that it's the API which you include with your projects and make function calls to in order to set up the audio pipelines. It's up to the designers to make use of those features and implement the audio chain in their code.
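
To make that concrete, this is roughly the shape such integration code takes. Every type and function in this sketch is invented for illustration; none of it is the actual TrueAudio, A3D, or Sensaura interface:

```cpp
// Hypothetical 3D-audio API usage; all identifiers are invented.
// The stub "SDK" is included so the sketch compiles on its own.
#include <memory>

namespace hypo {
struct Context { /* device state */ };
struct Voice   { void setPosition(float, float, float) {} void play() {} };
struct Reverb  { void setDecaySeconds(float) {} };

std::unique_ptr<Context> createContext() { return std::make_unique<Context>(); }
std::unique_ptr<Voice>   createVoice(Context&, const char*) { return std::make_unique<Voice>(); }
std::unique_ptr<Reverb>  createReverb(Context&) { return std::make_unique<Reverb>(); }
}

int main() {
    // The work that stays with the game designer: build the pipeline,
    // pick the effect settings per scene, and feed positions every frame.
    auto ctx    = hypo::createContext();
    auto reverb = hypo::createReverb(*ctx);
    reverb->setDecaySeconds(1.2f);              // per-level "baked" setting

    auto footsteps = hypo::createVoice(*ctx, "footsteps.wav");
    footsteps->setPosition(2.0f, 0.0f, -1.5f);  // updated by game logic
    footsteps->play();
    return 0;
}
```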

Sensaura and A3D needed a bit of coding work to be used properly, but once they were set up it was easy to obtain the desired results. EAX with EAGLE was click click click, tag, click click click, done...

Reply 15 of 25, by Scali

Rank l33t
ZanQuance wrote:

It is middleware in that it's the API which you include with your projects and make function calls to in order to set up the audio pipelines. It's up to the designers to make use of those features and implement the audio chain in their code.

Well, that's the thing. Not all APIs are middleware. DirectX is not middleware either; it's a hardware abstraction layer. You have to build the middleware on top of that (e.g. Unity, Unreal Engine, Frostbite, etc.). And it seems that TrueAudio is the same: there's no 'engine' that handles geometry, materials, etc. for you. You still have to do that yourself, and that is actually the hard part, which most developers can't do. (It reminds me of physics... AMD may offer OpenCL, but no physics middleware. nVidia delivers PhysX, which again is actual middleware.)
VrWorks Audio seems to be an 'engine', so it is what I consider 'middleware'.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 16 of 25, by ZanQuance

Rank Member

Supposedly they have that as well, with their Wwise plugin. I've gotten most of my information from articles like this one.

If I can't check it out myself, I can't say for certain what it is and how it's used. From what I've read in the archives, Sensaura was also considered middleware and A3D as an API was not, even though usage and development between the two were much the same.

Reply 18 of 25, by Scali

Rank l33t
mirh wrote:

Surprise surprise, even AMD is going down the GPGPU ray-traced audio road.
The DSP has been dropped, and the thing is soon™ going to be open sourced in the LiquidVR SDK.

Hum....

Graphics jobs tend to be long running, millions of pixels per frame mean a lot of calculations, which are relatively latency insensitive as long as they get done before the frame is needed by the monitor. Compute tasks, especially sound and VR/positioning, are significantly more latency sensitive.

That sounds a bit lopsided to be honest.
A 'graphics job' in this context seems to be like "Render a character" or even "Render a scene".
Yes, in that sense, they are long running, millions of pixels, lots of calculations.
*However*, we are talking about graphics and compute *tasks* here, where a single task equals a single graphics call, and there can be thousands of such calls per scene, or even per character (that's the whole point of DX12/Vulkan/Mantle: minimize overhead per graphics call, so you can issue far more draw calls per frame than with previous APIs).
And then the whole story inverts: GPGPU tasks are generally done at a per-frame level, e.g. "Solve AI" or "Process audio". This is quite different from graphics tasks, since you can generally pack all your input data into arrays, so that a single GPGPU task processes the AI for all characters at once, or all the audio for a frame at once, simply by looping over the input arrays until everything has been processed. That is what you want anyway: setting up a compute task has considerable overhead, much like draw calls, only worse, so you want to batch your data as much as possible, minimize the number of calls/tasks, and maximize the amount of work done per task.
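
A sketch of that batching argument in plain C++, with the loop standing in for what a single GPU dispatch would parallelize (the rolloff model and data are made up):

```cpp
// One batched task per frame: a single call walks every source, instead
// of one dispatch per source, where per-call overhead would dominate.
// On a GPU, the outer loop is what the kernel parallelizes.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Source { float distance; const float* dry; }; // packed per-source input

void processAudioFrame(const std::vector<Source>& sources,
                       std::vector<float>& mix) {
    for (const Source& s : sources) {
        float gain = 1.0f / std::max(1.0f, s.distance); // made-up rolloff
        for (std::size_t i = 0; i < mix.size(); ++i)
            mix[i] += gain * s.dry[i];                  // attenuated contribution
    }
}

int main() {
    float dryA[4] = { 1.0f, -1.0f, 1.0f, -1.0f };
    float dryB[4] = { 0.5f,  0.5f, 0.5f,  0.5f };
    std::vector<Source> sources = { { 2.0f, dryA }, { 8.0f, dryB } };
    std::vector<float> mix(4, 0.0f);

    processAudioFrame(sources, mix);   // one "launch" covers the whole frame
    for (float v : mix) printf("%.3f ", v);
    printf("\n");
    return 0;
}
```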

Which means the compute tasks are actually the long-running ones. And that means you can pretty much discard all the pretty diagrams and rhetoric there. None of it applies.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 19 of 25, by mirh

Rank Member

In all this talk, I didn't see a single quantification of the latency involved.
Yes, it will be more burdensome. And then? What's the alternative, aside from the good old quality-vs-power cat-and-mouse game?

Putting it all on the allegedly less efficient CPU? Yet more fixed-function dedicated hardware?

pcgamingwiki.com