VOGONS


Reply 40 of 51, by shevalier

Rank: Oldbie

There are two major game engine developers left in the world.
There are three major game console manufacturers left.
There's one developer of SoC for these consoles left.
And Microsoft, which only recognizes its own technologies.

And then Creative Labs chimes in: "I'll go to Kickstarter and make a new sound card."
PS. Although, to be honest, Creative isn't claiming it will be a PC gaming sound card.

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 41 of 51, by darry

Rank: l33t++
shevalier wrote on Today, 05:30:

There are two major game engine developers left in the world.
There are three major game console manufacturers left.
There's one developer of SoC for these consoles left.
And Microsoft, which only recognizes its own technologies.

And then Creative Labs chimes in: "I'll go to Kickstarter and make a new sound card."
PS. Although, to be honest, Creative isn't claiming it will be a PC gaming sound card.

I don't feel Creative has a clear idea of what it wants to make at this point, only that they don't want to front the R&D costs and that they would really need it to be very successful.

Reply 42 of 51, by lepidotós

Rank: Member
darry wrote on Today, 05:39:
shevalier wrote on Today, 05:30:

There are two major game engine developers left in the world.
There are three major game console manufacturers left.
There's one developer of SoC for these consoles left.
And Microsoft, which only recognizes its own technologies.

And then Creative Labs chimes in: "I'll go to Kickstarter and make a new sound card."
PS. Although, to be honest, Creative isn't claiming it will be a PC gaming sound card.

I don't feel Creative has a clear idea of what it wants to make at this point, only that they don't want to front the R&D costs and that they would really need it to be very successful.

If I had $10 million (here's hoping), I'd absolutely be able to build up an architecture design team, contract out an ASIC design house to handle the finer details, and hand them that on a silver platter, honestly.

Reply 43 of 51, by shevalier

Rank: Oldbie
darry wrote on Today, 05:39:

I don't feel Creative has a clear idea of what it wants to make at this point, only that they don't want to front the R&D costs and that they would really need it to be very successful.

Google & Samsung: "Introducing Eclipsa Audio: immersive audio for everyone"
The problem isn't R&D; the mathematical models have been around for a long time.
The problem is creating a patent-clear implementation and promoting it as a standard.
If this implementation isn't in Unreal Engine and Unity, it's a waste of money.
And Creative has neither the money nor the influence.

lepidotós wrote on Today, 06:16:

If I had $10 million (here's hoping), I'd absolutely be able to build up an architecture design team, contract out an ASIC design house to handle the finer details, and hand them that on a silver platter, honestly.

And then the gaming industry says, "It's cool, but complicated."
There are no specialists in your technology, and the ones that exist are expensive. No one wants to learn to program for it; it's too specialized.
Besides, it doesn't work on smartphones because it's not cross-platform.
Next, please.

Creative's EAX 3/4/5, which they already charged money for, are proof of that.
They failed to do this during their heyday, so I don't see any chance of success now.

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 44 of 51, by darry

Rank: l33t++
shevalier wrote on Today, 06:18:
darry wrote on Today, 05:39:

I don't feel Creative has a clear idea of what it wants to make at this point, only that they don't want to front the R&D costs and that they would really need it to be very successful.

Google & Samsung: "Introducing Eclipsa Audio: immersive audio for everyone"
The problem isn't R&D; the mathematical models have been around for a long time.
The problem is creating a patent-clear implementation and promoting it as a standard.
If this implementation isn't in Unreal Engine and Unity, it's a waste of money.
And Creative has neither the money nor the influence.

Patents do expire, and I do wonder how recent (and patent-encumbered) the needed IP is.

That being said, R&D encompasses more than sound-processing algorithm development. Integrating the algorithms into hardware, implementing and optimizing them in software, designing, adapting, or "badge engineering" a piece of hardware, and designing an API and possibly some middleware can all be considered R&D costs. And a lot of that, and more, might qualify for tax breaks and/or grants for R&D, depending on jurisdiction.

I do agree about the lack of money and influence.

Reply 45 of 51, by lepidotós

Rank: Member
shevalier wrote on Today, 06:18:
darry wrote on Today, 05:39:

I don't feel Creative has a clear idea of what it wants to make at this point, only that they don't want to front the R&D costs and that they would really need it to be very successful.

Google & Samsung: "Introducing Eclipsa Audio: immersive audio for everyone"
The problem isn't R&D; the mathematical models have been around for a long time.
The problem is creating a patent-clear implementation and promoting it as a standard.
If this implementation isn't in Unreal Engine and Unity, it's a waste of money.
And Creative has neither the money nor the influence.

lepidotós wrote on Today, 06:16:

If I had $10 million (here's hoping), I'd absolutely be able to build up an architecture design team, contract out an ASIC design house to handle the finer details, and hand them that on a silver platter, honestly.

And then the gaming industry says, "It's cool, but complicated."
There are no specialists in your technology, and the ones that exist are expensive. No one wants to learn to program for it; it's too specialized.
Besides, it doesn't work on smartphones because it's not cross-platform.
Next, please.

Creative's EAX 3/4/5, which they already charged money for, are proof of that.
They failed to do this during their heyday, so I don't see any chance of success now.

On the contrary; I'd be working on a feature that's currently gaining interest: sound ray tracing, which is already being added, at least in plugin form, to the various game engines. There's already a video with 1.5M views about a plugin for Unreal Engine and Godot that does exactly that. Right now those plugins run on the CPU, but that's only so scalable given how PC hardware gains have been decelerating year over year since around 2003, especially if games keep growing in resource demands. It would be especially good for VR, even if that's mainly installations that have VR in an arcade setting (you know, the ones with the treadmills). Sound in games tends to correspond with things happening: a game like Teardown would be an excellent use of audio RT, but it's already pretty hard on CPUs without it. I can only imagine how calculating the many noises from the constantly deforming environment, and the player's interactions with it, would slow the whole thing to a crawl.

Ultimately, I don't see not working on smartphones as that big a blocker; even if smartphones are a large sector of the gaming space, a single sound card at around $450 costs the same as a few years of someone buying MTX in a mobile game, and I could foresee maybe a million B2C installs and maybe 10-25K B2B ones. And of course, you can't plug a 5090 into a smartphone either.

Reply 46 of 51, by The Serpent Rider

Rank: l33t++

A wavetracing "accelerator" would be the final nail in Creative's coffin. It's also definitely beyond Kickstarter scale (for a slowly dying, long-forgotten hardware company, anyway).

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 47 of 51, by lepidotós

Rank: Member

Either that, or they become an AIB vendor for it, if whoever gets to it first wanted to play with that model. Maybe have Creative give it a shot for the PC gaming crowd, and FiiO or another company like that give it a shot for the audiophile or workstation crowd, perhaps using RT to simulate more accurate room sound or positional audio? I don't know how popular that would be, but it's one potential use case. It could also be good for signal processing on audio input, or as a low-cost option for budget filmmakers shooting green-screen scenes who want to simulate the audio as it would actually sound. Obviously I'm under no illusions that it would be anything but niche, but I think the niche for a card like that would be significantly bigger than for current sound cards, which for the most part just do the same thing as chipset audio, only better. At least in this case it actually offers something your motherboard alone wouldn't.

Reply 48 of 51, by shevalier

Rank: Oldbie
lepidotós wrote on Today, 07:20:

only right now they use the CPU

Nobody wants to program for and support hardware offloading of anything (AES encryption, network traffic, audio streams), except in the case of a critical shortage of general-purpose CPU power.
As soon as performance starts to suffice, everything switches back to software.

Ray tracing in games promises to reduce development costs.
What the designer previously did statically is now calculated automatically at runtime.
This means the designer will be fired. And his salary will be redirected to the marketing budget. 😀
If a wave-tracing technology for audio is developed that automatically builds scenes (i.e., takes a game level, applies a bit of AI magic, and the output happens automatically), it will become widespread.

If, like in EAX, you have to manually specify a "water-metal-wood" preset for each surface, then everyone knows how that ends.
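To make the scaling problem concrete, here's a toy sketch (my own illustration; the preset names and values are hypothetical, not EAX's actual parameters) of manual per-surface tagging, and of how the ordered material-to-material interactions grow roughly with the square of the material count:

```python
# Hypothetical per-surface material presets in the EAX style being described:
# every surface needs a hand-assigned tag, and every ordered pair of
# interacting materials is a case someone has to author by hand.
presets = {
    "water": {"absorption": 0.10, "reverb_decay_s": 1.8},
    "metal": {"absorption": 0.05, "reverb_decay_s": 2.5},
    "wood":  {"absorption": 0.30, "reverb_decay_s": 0.9},
}

surfaces = ["wood", "metal", "wood", "water"]  # one tag per surface, by hand
tagged = [presets[s] for s in surfaces]

# Ordered interaction pairs (metal hitting wood != wood hitting metal):
pairs = [(a, b) for a in presets for b in presets if a != b]
n_pairs = len(pairs)  # 3 materials -> 6 ordered pairs; 10 materials -> 90
```

The quadratic growth of `n_pairs` is the authoring burden: every new material multiplies the hand-tuned cases, which is exactly what an automatic approach avoids.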

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 49 of 51, by lepidotós

Rank: Member
shevalier wrote on Today, 08:14:
lepidotós wrote on Today, 07:20:

only right now they use the CPU

Nobody wants to program for and support hardware offloading of anything (AES encryption, network traffic, audio streams), except in the case of a critical shortage of general-purpose CPU power.
As soon as performance starts to suffice, everything switches back to software.

Sure, that's fair, but I can only imagine that scene complexity plays a part here. The software approach uses a relatively low-density voxel map of the scene (and so far has mainly been demonstrated with simplistic scenes; link provided for the benefit of future readers), which you'd have to recalculate each time the scene itself changes (e.g. a car moves or a box gets broken). That's nothing crazy on its own, but imagine simulating hundreds or thousands of sound sources at a time, say, NPCs making the sounds; that's not an unrealistic number for a Dynasty Warriors or city-simulation game, and Dynasty Warriors: Origins in its later levels aims to hit over 10,000 individually simulated NPCs at once. Given the slow rate of change generation over generation, unless CPUs start going for more and more cores (12 or 16 cores to offset the eventual stagnation), I can see it starting to really bog the CPU down.
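For readers who haven't seen the voxel approach, here's a minimal sketch (my own illustration, not the plugin's actual code) of the kind of work involved: march a ray from listener to source through a coarse occupancy grid and attenuate it for every solid voxel crossed. The grid itself is what has to be rebuilt whenever the scene changes:

```python
# Minimal sketch of voxel-based audio occlusion: march a straight ray from
# listener to source through a coarse 3D grid, multiplying transmission by
# (1 - absorption) for every solid voxel crossed. Purely illustrative.

def trace_occlusion(grid, absorption, start, end, steps=64):
    """grid[x][y][z] is 1 if the voxel is solid; absorption is in [0, 1]."""
    (x0, y0, z0), (x1, y1, z1) = start, end
    transmission = 1.0
    seen = set()  # don't count the same voxel twice along one ray
    for i in range(steps + 1):
        t = i / steps
        voxel = (int(x0 + (x1 - x0) * t),
                 int(y0 + (y1 - y0) * t),
                 int(z0 + (z1 - z0) * t))
        if voxel in seen:
            continue
        seen.add(voxel)
        vx, vy, vz = voxel
        if grid[vx][vy][vz]:
            transmission *= (1.0 - absorption)
    return transmission

# 8x8x8 scene, empty except a solid wall filling the x == 4 slab.
grid = [[[1 if x == 4 else 0 for z in range(8)] for y in range(8)]
        for x in range(8)]
clear = trace_occlusion(grid, 0.5, (1.5, 4.0, 4.0), (3.5, 4.0, 4.0))    # no wall crossed
blocked = trace_occlusion(grid, 0.5, (1.5, 4.0, 4.0), (6.5, 4.0, 4.0))  # crosses the wall
```

The per-ray march is cheap; the cost being discussed is that `grid` must be re-voxelized every time the geometry deforms, and multiplied across thousands of sources.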

shevalier wrote on Today, 08:14:

Ray tracing in games promises to reduce development costs.
What the designer previously did statically now was to be calculated automatically at runtime.
This means the designer will be fired. And his salary will be redirected to the marketing budget. 😀
If a wave-tracing technology for audio is developed that automatically builds scenes (i.e., takes a game level, applies a bit of magic AI , and the output happens automatically), it will become widespread.

If, like in EAX, you have to manually specify a "water-metal-wood" preset for each surface, then everyone knows how that ends.

Yeah, it automatically makes a scene map. It does not, it seems, automatically create material profiles, according to a comment response from the guy who wrote the engine; that would be a possible advantage for a well-designed hardware solution. A custom architecture designed specifically for this purpose could likely also access the scene geometry directly for the more simplistic RT implementations, rather than asking for a voxelized representation of it, saving more CPU power than just the rays themselves.

Additionally, game development was just less easy and less big-budget back then. I think most games, like indies, would be fine with simplistic RT; a lot of developers currently put time into making each material sound as it should manually anyway, and using that as the basis for automatically assigned material presets, at least in the early days, would help the transition. The super-big-budget games would likely have someone on staff whose job it is to do that sort of thing; with budgets in the hundreds of millions, they can afford it. They already have a few people doing lighting even with RT, either for players on machines without RT (I think the GTX 1660 is still around the number 2 or 3 most popular graphics card, and even people with 3060s and 4060s might turn RT off for better performance) or just to split up the workload of scene composition, light placement, and color palette so it takes less time.

Plus, even if you have to define the materials, it still helps significantly with the interactions between materials, in the same way that ray tracing doesn't so much speed up placing lights as speed up, or outright enable, how they interact with other lights and the rest of the world, including the player. Instead of having to account for, say, wood, metal, metal hitting wood, and wood hitting metal, you could have the wavetracing core handle how those interact.

The Serpent Rider wrote on Today, 07:33:

A wavetracing "accelerator" would be the final nail in Creative's coffin. It's also definitely beyond Kickstarter scale (for a slowly dying, long-forgotten hardware company, anyway).

And yeah, I think the thing I mentioned earlier, a card with a hardware synth on it, is significantly more likely for a company in Creative's current shoes. It would probably be interesting to musicians; I'd pick one up if it were made for that purpose. Of course, if it's just nostalgia bait and they're doing it in software, there's basically no reason to go for it, but that's the best realistic scenario I can see. I have a keyboard that sends out MIDI over USB (type C) that could easily be used as a controller for a hardware synth. Plus, they could justify a couple hundred dollars (maybe $229 would be reasonable?) given the prices of synths with attached keyboards.

But hey, regarding taking the risk: Creative is going to die on its current trajectory anyway, so what's the harm? That it shuts its doors in 2029 as opposed to 2034? It dies in two out of three scenarios, and the third isn't the one where they rest on their long-withered laurels. In terms of audience interest, there's a very simplistic implementation of wavetracing in Minecraft (based on the previously linked video; a lot easier to implement, and completely appropriate given it's, y'know, Minecraft) that's gotten 22,000 downloads in about three months.

Reply 50 of 51, by shevalier

Rank: Oldbie
lepidotós wrote on Today, 08:39:

Sure, that's fair, but I can only imagine that scene complexity plays a part here. The software approach uses a relatively low-density voxel map of the scene (and so far has mainly been demonstrated with simplistic scenes; link provided for the benefit of future readers), which you'd have to recalculate each time the scene itself changes (e.g. a car moves or a box gets broken). That's nothing crazy on its own, but imagine simulating hundreds or thousands of sound sources at a time, say, NPCs making the sounds; that's not an unrealistic number for a Dynasty Warriors or city-simulation game, and Dynasty Warriors: Origins in its later levels aims to hit over 10,000 individually simulated NPCs at once. Given the slow rate of change generation over generation, unless CPUs start going for more and more cores (12 or 16 cores to offset the eventual stagnation), I can see it starting to really bog the CPU down.

In theory, you're right, but it all falls apart because of... physics.
Two sine waves passing through a real device create combination frequencies (intermodulation).
While this isn't a problem for modern DACs and op-amps, it's a serious issue for the drivers themselves.
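To illustrate what intermodulation means here (a toy model of a nonlinear driver, my own sketch, not a measurement of any real hardware): feed the sum of two tones through a slightly nonlinear transfer function and check, with single-bin DFTs, the energy that appears at the difference and sum frequencies, which neither input tone contains:

```python
import math

def dft_magnitude(signal, freq, rate):
    """Amplitude of the single DFT bin at `freq` Hz (direct correlation)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

rate = 48000                     # 1 second of audio at 48 kHz
f1, f2 = 1000.0, 1300.0          # the two input tones

# Ideal linear device: output is just the sum of the two tones.
linear = [math.sin(2 * math.pi * f1 * i / rate) +
          math.sin(2 * math.pi * f2 * i / rate) for i in range(rate)]

# Slightly nonlinear device (toy stand-in for a real driver):
# a small quadratic term in the transfer function.
nonlinear = [x + 0.05 * x * x for x in linear]

# The quadratic term creates products at f2 - f1 (300 Hz) and f1 + f2 (2300 Hz),
# frequencies present in neither input tone.
imd_diff = dft_magnitude(nonlinear, f2 - f1, rate)   # ~0.05
imd_sum = dft_magnitude(nonlinear, f1 + f2, rate)    # ~0.05
clean_diff = dft_magnitude(linear, f2 - f1, rate)    # ~0.0
```

The cross term 2·sin(ω1t)·sin(ω2t) inside the squared output is what lands at the difference and sum frequencies; a perfectly linear device leaves those bins empty.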
Sometimes, completely by chance, you manage to create an IEM like the Crinacle Zero 2 for $17.
But most often you end up with a Dan Clark NOIRE XO for $1.2K.
Sell it as a bundle, a $100 sound card with something like the $300 Sennheiser 560S?
You'll go broke.
Don't sell it as a set, and half the users on Reddit (with headphones from AliExpress) will claim they can't hear the sound effects.
Only Apple can afford to switch everyone to absolutely identical and predictable headphones in its ecosystem.

All the technologies that included the word "3D" in their names and required specialized equipment are no longer (widely) available: 3D TVs, VR glasses, 3D sound cards.
Therefore, mixing more than 30 sources simultaneously is pointless.
Psychoacoustics also suggests that the discernibility of individual sounds against a background of noise is finite.

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 51 of 51, by lepidotós

Rank: Member
shevalier wrote on 38 minutes ago:
lepidotós wrote on Today, 08:39:

Sure, that's fair, but I can only imagine that scene complexity plays a part here. The software approach uses a relatively low-density voxel map of the scene (and so far has mainly been demonstrated with simplistic scenes; link provided for the benefit of future readers), which you'd have to recalculate each time the scene itself changes (e.g. a car moves or a box gets broken). That's nothing crazy on its own, but imagine simulating hundreds or thousands of sound sources at a time, say, NPCs making the sounds; that's not an unrealistic number for a Dynasty Warriors or city-simulation game, and Dynasty Warriors: Origins in its later levels aims to hit over 10,000 individually simulated NPCs at once. Given the slow rate of change generation over generation, unless CPUs start going for more and more cores (12 or 16 cores to offset the eventual stagnation), I can see it starting to really bog the CPU down.

In theory, you're right, but it all falls apart because of... physics.
Two sine waves passing through a real device create combination frequencies (intermodulation).
While this isn't a problem for modern DACs and op-amps, it's a serious issue for the drivers themselves.
Sometimes, completely by chance, you manage to create an IEM like the Crinacle Zero 2 for $17.
But most often you end up with a Dan Clark NOIRE XO for $1.2K.
Sell it as a bundle, a $100 sound card with something like the $300 Sennheiser 560S?
You'll go broke.
Don't sell it as a set, and half the users on Reddit (with headphones from AliExpress) will claim they can't hear the sound effects.
Only Apple can afford to switch everyone to absolutely identical and predictable headphones in its ecosystem.

All the technologies that included the word "3D" in their names and required specialized equipment are no longer (widely) available: 3D TVs, VR glasses, 3D sound cards.
Therefore, mixing more than 30 sources simultaneously is pointless.
Psychoacoustics also suggests that the discernibility of individual sounds against a background of noise is finite.

I can see the argument and wouldn't be shocked if it worked out that way; I've known about that kind of thing for a long time (Adam Neely has a pretty good surface-level video or two on psychoacoustics that I watched as an intro to it). Still, I feel that if you can record decent audio of a real-life city scene without acoustic beats or weird harmonics from interference, a simulated city scene is likely fine too. Maybe it would have to be tested with the Virtual Barbershop method to be more applicable. And while you can't guarantee what sort of speakers or headphones the output is going into, you can generally assume a sound card buyer is toward the better end of that spectrum. That said, the cheap end isn't hopeless for positional awareness either: my current speakers, a $12 pair of Boston Acoustics something-or-other, aren't as detailed as even an equivalent 5.1 setup, but they still offer decent spatial awareness, and there have been times recorded audio sounded real coming out of them, at least to my ears.

Also, I doubt $100. Probably $350 to $375 if you wanted to be aggressive, given the R&D budget for such a project. Steep, maybe, but the much less adventurous (we'll call it) AE-9 still sells in the $400s, so not exactly unprecedented.