VOGONS

First post, by m1so

Rank: Member

Were these demos ever released, or were they just a tease to put in videos and news articles? If they were, where can I download a raytracing renderer for Quake 3/4/Quake Wars?

Reply 1 of 19, by obobskivich

Rank: l33t

I don't ever remember them being available directly to the public, but I remember Intel showing off their Quake RT demo back when they were talking about Larrabee all the time. If I remember right, it's also tied to that hardware, which was never released to the public. The whole project can be viewed here: http://www.qwrt.de/

It does not appear that you can download anything from them, though, and it is also unlikely it would run very well unless you have substantial processing hardware at your disposal (building a 24-core Dunnington box isn't that much of a stretch nowadays, but all of that for ~25 fps at 720p seems nutters). As far as I know, OpenRT, or at least parts of it, was (and maybe still is) available online, and there are apparently demos that it can render in real time (versus just watching videos) too. Nothing I've ever had a mind to do at home, though, mostly because I wouldn't expect the performance to be very good.

Reply 2 of 19, by m1so

Rank: Member

25 fps at 720p is "nutters"? Man, are you aware of general raytracing performance? In the raytracing world, 60 PIXELS per second is considered good rendering performance, not frames. The CGI in Transformers took 48 hours per frame to render, and there were frames that took around 100 hours. I want to see a complex game raytraced on my PC; I don't expect "buttery smoothness" or to actually play it to get a lot of frags.

Dunnington is a very old CPU by now; you could probably do it far cheaper today. Also, the first game, Quake III, was not rendered on Intel hardware; it was rendered on a cluster of 20 Athlon XP 1800+ machines by students at some university. I am quite sure my i7-875K at 2.93 GHz is just about as fast, if not faster.

I want to see a raytracing demo on PC that is not just reflective spheres and is somehow interactive, that's all. With raytracing, 0.5 fps would be "good performance"; rendering a single reflective sphere with common non-realtime raytracers takes minutes, for god's sake.
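To make "a single reflective sphere" concrete, here is a minimal toy raytracer: pure Python, no dependencies, one mirror sphere over a checkered floor. The scene and all names are invented for illustration, not taken from any of the demos discussed, and even this takes several seconds per frame in plain Python:

```python
# Minimal toy raytracer: one reflective sphere over a checkered floor.
# Writes a plain-text PPM image; everything here is illustrative.
import math

WIDTH, HEIGHT = 320, 240
MAX_BOUNCES = 3
SPHERE_C = (0.0, 0.0, -3.0)   # sphere centre
SPHERE_R = 1.0                # sphere radius

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)

def normalize(a):
    n = math.sqrt(dot(a, a))
    return scale(a, 1.0 / n)

LIGHT = normalize((1.0, 1.0, 0.5))  # directional light

def hit_sphere(orig, d):
    """Distance to the nearest sphere hit along unit ray d, or None."""
    oc = sub(orig, SPHERE_C)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(orig, d, depth):
    t = hit_sphere(orig, d)
    if t is not None:
        if depth >= MAX_BOUNCES:
            return (0.0, 0.0, 0.0)
        p = add(orig, scale(d, t))
        n = normalize(sub(p, SPHERE_C))
        # perfect mirror: reflect the ray and recurse
        r = sub(d, scale(n, 2.0 * dot(d, n)))
        refl = trace(p, r, depth + 1)
        diffuse = max(0.0, dot(n, LIGHT))
        return tuple(0.7 * c + 0.3 * diffuse for c in refl)
    if d[1] < 0.0:  # checkered floor plane at y = -1
        t = (-1.0 - orig[1]) / d[1]
        p = add(orig, scale(d, t))
        shade = 0.9 if (math.floor(p[0]) + math.floor(p[2])) % 2 else 0.2
        return (shade, shade, shade)
    return (0.4, 0.6, 0.9)  # sky colour

with open("sphere.ppm", "w") as f:
    f.write(f"P3 {WIDTH} {HEIGHT} 255\n")
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # camera at the origin, looking down -z
            dx = (x - WIDTH / 2) / HEIGHT
            dy = -(y - HEIGHT / 2) / HEIGHT
            col = trace((0.0, 0.0, 0.0), normalize((dx, dy, -1.0)), 0)
            f.write(" ".join(str(int(min(c, 1.0) * 255)) for c in col) + "\n")
```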

Reply 3 of 19, by obobskivich

Rank: l33t
m1so wrote:

25 fps at 720p is "nutters"? Man, are you aware of general raytracing performance? In the raytracing world, 60 PIXELS per second is considered good rendering performance, not frames. The CGI in Transformers took 48 hours per frame to render, and there were frames that took around 100 hours. I want to see a complex game raytraced on my PC; I don't expect "buttery smoothness" or to actually play it to get a lot of frags.

Yes I'm aware of what CGI for movies entails; it's a very different scenario than gaming. My comment on it being nutters is in the context of actually playing Quake, not just playing around with RT. Sorry if I misunderstood your original post. 😊

m1so wrote:

Dunnington is a very old CPU by now; you could probably do it far cheaper today. Also, the first game, Quake III, was not rendered on Intel hardware; it was rendered on a cluster of 20 Athlon XP 1800+ machines by students at some university. I am quite sure my i7-875K at 2.93 GHz is just about as fast, if not faster.

Your i7 would likely not perform better than the Dunnington workstation or the Athlon cluster, because it cannot run 20-24 threads simultaneously (and it doesn't offer the roughly 6x per-thread performance advantage it would need to offset that). It would certainly perform better than four cores of either, though.

m1so wrote:

I want to see a raytracing demo on PC that is not just reflective spheres and is somehow interactive, that's all. With raytracing, 0.5 fps would be "good performance"; rendering a single reflective sphere with common non-realtime raytracers takes minutes, for god's sake.

http://www.geforce.com/games-applications/pc- … s/design-garage

Reply 4 of 19, by m1so

Rank: Member

The Passmark score for the Athlon XP 1800+ is 312; for the i7-875K it is 5461. That is more than 17x higher. So sorry man, but you've drastically underestimated modern CPUs. Besides, it has hyperthreading, so 8 threads.

Thanks for the garage demo, but I've already found a better one: http://www.pouet.net/prod.php?which=61211. It's not exactly "realistic" in style, but awesome. I'd still prefer a game demo, though.

Reply 5 of 19, by obobskivich

Rank: l33t
m1so wrote:

The Passmark score for the Athlon XP 1800+ is 312; for the i7-875K it is 5461. That is more than 17x higher. So sorry man, but you've drastically underestimated modern CPUs. Besides, it has hyperthreading, so 8 threads.

Hyperthreading is not directly equivalent to having more physical processors, and Passmark is a synthetic benchmark that doesn't assess any directly observable real-world scenario and heavily favors modern hardware (which is explained in their "Test Information" section). If you want to make accurate comparisons of this hardware, you'd need to find metrics based on rendering packages (ideally the same renderer you'd like to use). Here's a quick'n'dirty comparison of a Wolfdale to an i5 from the generation after your i7, which includes a rendering benchmark:
http://www.cpu-world.com/Compare/538/Intel_Co … i5_i5-655K.html

While it isn't a straight-up comparison of the Dunnington to the i7, it's perhaps more accurate because it's comparing dual-core to dual-core (and the Nehalem does have HT), and they're clocked fairly similarly. You'll notice that in the rendering benchmark, the gains of the newer CPU are relatively minimal: under 30%, and less in single-thread, showing that hyperthreading is helping a little bit. Twenty-four such cores will therefore perform better than the four on an i7. Now of course we could go back to Passmark and see that the i5 fares better (roughly a 55% higher score), although if we look at the Passmark single-thread score, the results aren't as impressive (barely a 16% higher score).

The Athlon XP scenario is more extreme, although I wasn't able to find comparable rendering benchmarks for the Athlon XP (which is not surprising). But even using the Passmark numbers, the i7's single-thread score is not "17x" the Athlon's; the Athlon's is around half to one-third of it. Like I said, I would expect the i7 to perform better than a quartet of K7s, but not twenty or more, especially because it isn't an apples-to-apples comparison: the cluster will have more throughput than the single machine, hyperthreading or not.

All of this also ignores that consumer hardware generally won't stand up to the 24/7, 100% duty cycle that render farms have to deal with.
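Put as back-of-envelope math, the argument looks like this. It is only a sketch: every number below is an assumption spelled out in the comments, not a measured benchmark result.

```python
# Back-of-envelope version of the throughput argument above.
# All numbers are illustrative assumptions, not real benchmark results.

ATHLON_ST = 1.0   # normalize the Athlon XP 1800+ single-thread speed to 1.0
I7_ST = 2.5       # assume the i7 core is ~2.5x faster per thread
                  # (the "half to one-third" figure above)
SMT_BONUS = 1.2   # assume hyperthreading adds ~20% on top of 4 cores
CLUSTER_NODES = 20
CLUSTER_EFFICIENCY = 0.85  # "knock 10-20% off the top" for dispatch overhead

cluster = ATHLON_ST * CLUSTER_NODES * CLUSTER_EFFICIENCY
i7 = I7_ST * 4 * SMT_BONUS  # 4 physical cores, HT on top

print(f"cluster throughput: {cluster:.1f}")        # 17.0
print(f"i7-875K throughput: {i7:.1f}")             # 12.0
print(f"cluster advantage:  {cluster / i7:.2f}x")  # ~1.42x
```

Under these assumptions the single i7 loses to the cluster on aggregate throughput, even though it wins any one-on-one comparison.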

m1so wrote:

Thanks for the garage demo, but I've already found a better one: http://www.pouet.net/prod.php?which=61211. It's not exactly "realistic" in style, but awesome. I'd still prefer a game demo, though.

Is this one tied to CUDA or nVidia hardware? I looked for others after finding that nVidia demo, and kept finding things tied to CUDA or some other nVidia functionality... 😵 🤣

Reply 6 of 19, by Skyscraper

Rank: l33t

My systems can run 24 threads with HT.
HT may not be the same as having real cores, but my Xeon systems make up for that with high frequency.

I'm pretty sure my systems would beat a 24-core Dunnington/Penryn system, so that type of performance is not "out of reach".
My systems ran folding@home 24/7 for months, so they can handle continuous heavy load for extended periods of time.

My CPUs might be Xeons, but the boards are not really server/workstation boards; I'm pretty sure the EVGA SR-2 uses the same components as single-socket LGA 1366 boards.

New PC: i9 12900K @5GHz all cores @1.2v. MSI PRO Z690-A. 32GB DDR4 3600 CL14. 3070Ti.
Old PC: Dual Xeon X5690@4.6GHz, EVGA SR-2, 48GB DDR3R@2000MHz, Intel X25-M. GTX 980ti.
Older PC: K6-3+ 400@600MHz, PC-Chips M577, 256MB SDRAM, AWE64, Voodoo Banshee.

Reply 7 of 19, by obobskivich

Rank: l33t
Skyscraper wrote:

My systems can run 24 threads with HT.
HT may not be the same as having real cores, but my Xeon systems make up for that with high frequency.

I'm pretty sure my systems would beat a 24-core Dunnington/Penryn system, so that type of performance is not "out of reach".

The system in your signature with the SR-2 should perform better than the Dunnington, no doubt. 🤣 But the Dunnington server shouldn't be considered out of reach either: look up the Dell PowerEdge R900; they're commonly around $400-$600 with very substantial configurations (hence my comment that it isn't much of a stretch). Of course modern high-end platforms will perform better, but pricing generally starts in the five-figure range and goes up from there.

Reply 8 of 19, by Skyscraper

Rank: l33t
obobskivich wrote:
Skyscraper wrote:

My systems can run 24 threads with HT.
HT may not be the same as having real cores, but my Xeon systems make up for that with high frequency.

I'm pretty sure my systems would beat a 24-core Dunnington/Penryn system, so that type of performance is not "out of reach".

The system in your signature with the SR-2 should perform better than the Dunnington, no doubt. 🤣 But the Dunnington server shouldn't be considered out of reach either: look up the Dell PowerEdge R900; they're commonly around $400-$600 with very substantial configurations (hence my comment that it isn't much of a stretch). Of course modern high-end platforms will perform better, but pricing generally starts in the five-figure range and goes up from there.

It's also possible to build crazy Opteron systems for very little money these days.
I think I will stick with my SR-2 systems until they are too slow for gaming and stuff.

I bought the first SR-2 board when it was released in 2010; it was not a bad investment at all.
Now I have hoarded more boards and upgraded to the fastest CPUs, which perhaps wasn't as good an investment 😀

I would be glad if we get more games and stuff that benefit from more cores. I have some hope that the new generation of consoles will help with that.


Reply 9 of 19, by obobskivich

Rank: l33t
Skyscraper wrote:

It's also possible to build crazy Opteron systems for very little money these days.
I think I will stick with my SR-2 systems until they are too slow for gaming and stuff.

I bought the first SR-2 board when it was released in 2010; it was not a bad investment at all.
Now I have hoarded more boards and upgraded to the fastest CPUs, which perhaps wasn't as good an investment 😀

I would be glad if we get more games and stuff that benefit from more cores. I have some hope that the new generation of consoles will help with that.

I remember back in 2008 I was considering the D5400XS, and benchmarks showed it being of little to no advantage for gaming over a competent dual- or quad-core. I ended up passing on it as a result; I wasn't unhappy with the machine I built, but I think the board itself had better utility in terms of expansion options than the single-socket boards at the time. From what I've read, the SR-2 and SR-X are something of a spiritual successor to the D5400XS. I don't think there's any substantial advantage even in modern games (as far as I know, the number of games that heavily benefit from even a quad-core is still fairly low), but there are plenty of non-gaming tasks that will happily use the extra CPUs.

As far as the consoles: it would be nice if they ushered in an era of SMP-optimized applications, as well as bringing HSA into the mainstream, but I think for most applications neither is really necessary unless you want to load them up with multimedia decorations (read: bloat). 😵

Reply 10 of 19, by 5u3

Rank: Oldbie
obobskivich wrote:

m1so wrote:

Thanks for the garage demo, but I've already found a better one: http://www.pouet.net/prod.php?which=61211. It's not exactly "realistic" in style, but awesome. I'd still prefer a game demo, though.

Is this one tied to CUDA or nVidia hardware? I looked for others after finding that nVidia demo, and kept finding things tied to CUDA or some other nVidia functionality... 😵 🤣

The 5 faces tracer isn't exclusive to nvidia/CUDA. The coder has written a couple of nice "making of" blog entries with lots of technical info.

Reply 11 of 19, by m1so

Rank: Member

The Quake III guys didn't build a true render farm either; they weren't running it at 100% load, 24/7. Normally I'd agree with your criticism of quoting single-thread performance, but raytracing is an "embarrassingly parallel" problem: it should fly (well, compared to a 2004 single-core) on a modern multicore CPU. It also helps that the cores are close together; I can't believe their system of connected PCs wasn't plagued by latency.

The demo runs on AMD graphics hardware too. It is basically a regular scene demo. It has an abstract but nice theme of a glass city breaking apart, and it uses GPU raytracing; it recommends a GTX 680 or a Radeon 7970. Fraps displays no fps, but it runs okay on my GTX 660 at 1080p (720p is recommended by the author), by eye perhaps 10-20 fps, with drops lower during especially heavy scenes of structure destruction. But hey, it's a true raytracing demo on a consumer-level upper-midrange GPU.

Reply 12 of 19, by obobskivich

Rank: l33t
5u3 wrote:

The 5 faces tracer isn't exclusive to nvidia/CUDA. The coder has written a couple of nice "making of" blog entries with lots of technical info.

Very cool, thanks.

m1so wrote:

The Quake III guys didn't build a true render farm either; they weren't running it at 100% load, 24/7. Normally I'd agree with your criticism of quoting single-thread performance, but raytracing is an "embarrassingly parallel" problem: it should fly (well, compared to a 2004 single-core) on a modern multicore CPU. It also helps that the cores are close together; I can't believe their system of connected PCs wasn't plagued by latency.

Yes, it is parallel, and that's exactly what the cluster benefits from. The "issue" with the Passmark scores for this analysis is that they reflect the parallelism of the i5 or i7, but not of the K7 cluster. The single-thread performance is applicable if you do a bit of rough math: clone it out twenty-plus times, knock 10-20% off the top, and you'd have a closer figure. A single Athlon against an i7 in this kind of task *would* be pretty badly off, but the gang of them will do much better. Of course any 1:1 comparison will see the i7 win (e.g. 20 i7s against 20 Athlons).

As far as latency, it shouldn't be a big problem: renderers differ from Beowulf clusters in that they aren't trying to be an SSI (single system image). Instead it's a more distributed model; there's a central server that sends jobs out to each node and waits for the nodes to complete them. Having a very high-performance interconnect like InfiniBand wouldn't improve computational performance on the nodes, but over a very large project (like a feature-length film) it may reduce the total time to complete, because data would move between each point faster. That said, I don't think modern CG tends towards such equipment, because of cost more than anything else. It would certainly be interesting to see that tested out, though; I know AMD had some OpenCL presentations showing Kaveri outperforming an HD 5870 at some "light" GPGPU tasks precisely because of the lower latency you've mentioned, but for more complex tasks where the processor has to chew on the data longer, the HD 5870 still performed better.
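The dispatch model itself is simple to sketch. The toy example below uses Python's multiprocessing pool as a stand-in for networked render nodes, with a made-up shade() function in place of a real raytracer; it illustrates only the tile-job structure, not OpenRT or any actual farm software:

```python
# Sketch of the "central server hands tile jobs to nodes" model.
# Local worker processes stand in for networked render nodes.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 640, 480, 64

def shade(x, y):
    # stand-in for a real per-pixel raytrace
    return ((x ^ y) & 0xFF, (x * y) & 0xFF, (x + y) & 0xFF)

def render_tile(job):
    """Worker: render one tile, return it with its frame coordinates."""
    x0, y0 = job
    tile = [[shade(x, y) for x in range(x0, min(x0 + TILE, WIDTH))]
            for y in range(y0, min(y0 + TILE, HEIGHT))]
    return x0, y0, tile

if __name__ == "__main__":
    # the "central server": queue up one job per tile of the frame
    jobs = [(x, y) for y in range(0, HEIGHT, TILE)
                   for x in range(0, WIDTH, TILE)]
    frame = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
    with Pool() as pool:  # one worker per CPU core
        # hand jobs to whichever node is free; stitch results as they land
        for x0, y0, tile in pool.imap_unordered(render_tile, jobs):
            for dy, row in enumerate(tile):
                frame[y0 + dy][x0:x0 + len(row)] = row
    print(f"rendered {len(jobs)} tiles of {TILE}x{TILE} pixels")
```

Because no tile depends on any other, the workers never need to talk to each other, which is why interconnect latency matters so little here.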

m1so wrote:

The demo runs on AMD graphics hardware too. It is basically a regular scene demo. It has an abstract but nice theme of a glass city breaking apart, and it uses GPU raytracing; it recommends a GTX 680 or a Radeon 7970.

Cool. I'll have to give that one a look on my Radeon. 😀

m1so wrote:

Fraps displays no fps

Fraps looks for specific DirectX calls to estimate the frame rate, and this RT renderer isn't using DirectX. 😊

If the application won't provide an estimate of its own frame rate, the only other option is direct measurement, which requires an additional computer and some extra hardware. 😵
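For what it's worth, if you could hook the render loop yourself (not an option with a closed demo binary), the timing itself is trivial. A minimal sketch, with a dummy sleep standing in for the renderer:

```python
# Count frames over a rolling one-second window and print the rate.
import time

def run(render_frame, seconds=10.0):
    """Call render_frame in a loop, printing fps roughly once per second."""
    frames = 0
    window_start = time.perf_counter()
    deadline = window_start + seconds
    while time.perf_counter() < deadline:
        render_frame()
        frames += 1
        now = time.perf_counter()
        if now - window_start >= 1.0:
            print(f"{frames / (now - window_start):.1f} fps")
            frames, window_start = 0, now

# dummy "renderer" that just sleeps ~16 ms, so this prints roughly 60 fps
run(lambda: time.sleep(0.016))
```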

m1so wrote:

...but it runs okay on my GTX 660 at 1080p (720p is recommended by the author), by eye perhaps 10-20 fps, with drops lower during especially heavy scenes of structure destruction. But hey, it's a true raytracing demo on a consumer-level upper-midrange GPU.

That's very impressive, and very good to hear that it works. Certainly an improvement over a rack of machines! 😎 😲

Did the nVidia garage demo work on the GTX 660 very well? Or did you not bother with it?

Reply 13 of 19, by AlphaWing

Rank: Oldbie

I can't find a link to it anymore...
But there is a benchmark called RealStorm 2006 that does raytracing in real time...
It can still bring a modern PC to its knees, but it's single-threaded.

Reply 14 of 19, by kolano

Rank: Oldbie
AlphaWing wrote:

I can't find a link to it anymore...
But there is a benchmark called RealStorm 2006 that does raytracing in real time...
It can still bring a modern PC to its knees, but it's single-threaded.

It was by the demoscene group "Federation Against Nature". Here's the Internet Archive copy of its old web page...
https://web.archive.org/web/20070718060505/ht … .realstorm.com/

Unfortunately the download links there are broken. I was able to find a copy here...
http://www.mmnt.net/db/0/0/91.196.102.67/Inco … nch/Bench%20CPU

Their older raytraced demos (e.g. Nature Suxx) can still be found on Pouet...
http://www.pouet.net/groups.php?which=216


Eyecandy: Turn your computer into an expensive lava lamp.

Reply 15 of 19, by kolano

Rank: Oldbie

Regarding the discussion of raytracing performance characteristics on different CPUs, the POV-Ray benches might be more relevant...
http://new.haveland.com/povbench/graph.php

Unfortunately, they seem to have few results for modern processors (the list cuts off around the i5 760, and there are chips twice as fast or more now).


Reply 16 of 19, by mr_bigmouth_502

Rank: Oldbie

I think it would be neat if Intel got back into the dedicated graphics market. The Intel HD series onboard GPUs are nothing special, but could you imagine what they could come up with for a dedicated GPU? I'm thinking they could provide a good low-to-mid-range alternative to AMD's and Nvidia's offerings. Some would say that's where their onboard GPUs currently sit, but I think they could do better if they weren't hampered by the limitations of onboard graphics.

Reply 17 of 19, by smeezekitty

Rank: Oldbie
mr_bigmouth_502 wrote:

I think it would be neat if Intel got back into the dedicated graphics market. The Intel HD series onboard GPUs are nothing special, but could you imagine what they could come up with for a dedicated GPU? I'm thinking they could provide a good low-to-mid-range alternative to AMD's and Nvidia's offerings. Some would say that's where their onboard GPUs currently sit, but I think they could do better if they weren't hampered by the limitations of onboard graphics.

Intel's GPUs have been pretty lousy, honestly. Even now they are mostly behind AMD's APUs, and the drivers are not really optimized for the newest standards, applications, and games.

Reply 18 of 19, by F2bnp

Rank: l33t

I know this is slightly off-topic, but calling Intel's HD series "nothing special" is pretty ignorant. I for one am very happy with integrated GPUs from both Intel and AMD. Intel HD is indeed inferior to the AMD APUs in almost every way. However, until very recently, gaming on integrated GPUs, whether they sat on the same package as the CPU or on the motherboard, was pretty terrible.
Drivers were shit and the performance was just not there.

Today, you can build a budget PC, a casual gaming PC, or even a very powerful PC without the need for a powerful discrete GPU, and you get so much more bang for your buck. I'm very grateful for this advancement, and it's not going away in the slightest.

Reply 19 of 19, by obobskivich

Rank: l33t

I'll agree with modern Intel graphics being pretty impressive for what they are, especially compared to the GMA and "Extreme" graphics of a few years ago. With GT2 graphics, my Ultrabook can run full Aero Glass at HD resolutions and still achieve 8-9 hours of battery life with no issues, and it can also handle new-ish 3D applications without completely choking. It would be very interesting if the recent IGPs, and Intel's increasing expertise with graphics in general, are signs of bigger things to come. Additional competition is always good for the consumer.