VOGONS


Reply 100 of 243, by Irinikus

Rank: Oldbie

The fans aren't currently available over here in SA, so I'll have to wait, unfortunately! 🙁

I've downgraded to Windows 7 64-bit, as I do agree that the interface in Windows 7 is better than that of 8!

Here's the Windows 7 Experience Index for this machine: (The CPUs and RAM really let it down!)

[screenshot: Windows 7 Experience Index]

And here's the 3DMark05 Score: (Reasonable for a P4-type system!)

[screenshot: 3DMark05 score]

Last edited by Irinikus on 2023-09-04, 17:46. Edited 1 time in total.

YouTube

Reply 101 of 243, by Irinikus

Rank: Oldbie

I tried disabling SLI and set the GPUs up in this way: (And this improved things a bit! 😀 )

[screenshot: GPU configuration]

Here's the new Windows Experience Index: (The graphics score improved by 2 points!)

[screenshot: Windows Experience Index]

Here's the new 3DMark05 Score:

[screenshot: 3DMark05 score]

YouTube

Reply 103 of 243, by luckybob

Rank: l33t
Irinikus wrote on 2023-09-04, 16:43:

I've UPgraded to Windows 7 64-Bit, as I do agree that the interface in windows 7 is better than that of 8!

There, I fixed your post.

Also, did you upgrade to the maximum RAM, or are you still at 8GB? I don't recall. MORE RAM IS BETTER!

It is a mistake to think you can solve any major problems just with potatoes.

Reply 104 of 243, by Irinikus

Rank: Oldbie
luckybob wrote on 2023-09-04, 19:16:
Irinikus wrote on 2023-09-04, 16:43:

I've UPgraded to Windows 7 64-Bit, as I do agree that the interface in windows 7 is better than that of 8!

There, I fixed your post.

Also, did you upgrade to the maximum RAM, or are you still at 8GB? I don't recall. MORE RAM IS BETTER!

You're right, it's an upgrade! 😀

I'm still running 8GB of RAM for now. (I would only consider upgrading to 16GB if I find a complete matched 16GB kit.)

YouTube

Reply 105 of 243, by luckybob

Rank: l33t

Fair enough. 8GB is "enough", but it is a neurosis of mine. ALL RAM slots need to be filled, and ALL RAM slots need to be matched. *eye twitch* I've even bought extra RAM because the chips were different between modules.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 106 of 243, by H3nrik V!

Rank: Oldbie
luckybob wrote on 2023-09-04, 19:16:
Irinikus wrote on 2023-09-04, 16:43:

I've UPgraded to Windows 7 64-Bit, as I do agree that the interface in windows 7 is better than that of 8!

There, I fixed your post.

You beat me to it, Bob 🤣

Please use the "quote" option if asking questions to what I write - it will really up the chances of me noticing 😀

Reply 108 of 243, by acl

Rank: Oldbie
chinny22 wrote on 2023-09-05, 06:49:

SLI and SMP
I love the way you think!

"Two is one and one is none"

This applies to a lot of situations/systems/jobs
That doesn't really describe SMP and SLI, but I like duplicated things too.
Probably from my job.

"Hello, my friend. Stay awhile and listen..."
My collection (not up to date)

Reply 109 of 243, by Irinikus

Rank: Oldbie
luckybob wrote on 2023-09-04, 19:54:

Fair enough. 8GB is "enough", but it is a neurosis of mine. ALL RAM slots need to be filled, and ALL RAM slots need to be matched. *eye twitch* I've even bought extra RAM because the chips were different between modules.

I suffer from the same obsession with perfection! 😀 (It pains me if things don't look the way they're supposed to!)

chinny22 wrote on 2023-09-05, 06:49:

SLI and SMP
I love the way you think!

Thanks Man! 😀

I'm trying to get the absolute maximum out of this system by offering it all the features possible, such as dedicated PhysX support to make up for poor CPU compute performance in games which support it! (As the system needs all the help it can get! 😀 )

acl wrote on 2023-09-05, 11:24:
"Two is one and one is none" […]
Show full quote

"Two is one and one is none"

This applies to a lot of situations/systems/jobs
That doesn't really describe SMP and SLI, but I like duplicated things too.
Probably from my job.

Two is indeed better than one!

I've found that dedicating one of the GPUs to graphics and the other to PhysX seems to be the best configuration here!

I'm definitely going to try a GTX Titan Z in this system when the opportunity to purchase one presents itself! (I'll set it up in the same way as I've set up the GTX 690!)

The only thing that bothers me is that the Titan Z is clocked at a lower frequency than the GTX 690, and if you take a look at the GPU usage that I got in Crysis, the Titan Z may actually offer reduced performance in this case!?

[screenshots: Crysis GPU usage]

YouTube

Reply 111 of 243, by Irinikus

Rank: Oldbie
The Serpent Rider wrote on 2023-09-05, 17:09:

Not sure how you came to that conclusion.

It all depends on how GPU usage is determined: if it's based on how many of the cores are fired up, then the GTX 690 may have an advantage due to its higher core clock.

We'll see when I carry out the experiment!

I really hope that I'm wrong!

YouTube

Reply 112 of 243, by The Serpent Rider

Rank: l33t++

It doesn't. The Titan has almost twice the number of CUDA cores.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 114 of 243, by luckybob

Rank: l33t
Irinikus wrote on 2023-09-05, 17:51:

How is GPU usage calculated?

Smoke, mirrors, and goat sacrifice.

I recall Steve from GN saying that GPU usage is kinda made up, or something to that effect. I would imagine it just asks the GPU to do a trivial task and records how long it takes to get a reply. The longer the task takes, the more load on the GPU.

That's how I'd program it, anyway.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 115 of 243, by Irinikus

Rank: Oldbie
luckybob wrote on 2023-09-05, 18:00:
Irinikus wrote on 2023-09-05, 17:51:

How is GPU usage calculated?

Smoke, mirrors, and goat sacrifice.

I recall Steve from GN saying that GPU usage is kinda made up, or something to that effect. I would imagine it just asks the GPU to do a trivial task and records how long it takes to get a reply. The longer the task takes, the more load on the GPU.

That's how I'd program it, anyway.

Thanks! 😀

Speaking of RAM, I may actually be able to get 32GB into this machine!

[screenshot]

YouTube

Reply 116 of 243, by luckybob

Rank: l33t

Oh neat!

News to me. I kinda thought DDR2 petered out at 2GB/stick.

https://www.ebay.com/itm/124383403699

Not expensive! (At least in America.)

It is a mistake to think you can solve any major problems just with potatoes.

Reply 117 of 243, by Irinikus

Rank: Oldbie
luckybob wrote on 2023-09-05, 19:22:

Oh neat!

News to me. I kinda thought DDR2 petered out at 2GB/stick.

https://www.ebay.com/itm/124383403699

Not expensive! (At least in America.)

That listing's for DDR2-800 CL6; this machine runs DDR2-400 CL3. If I installed such sticks in the system, would they run at 400 CL3? (You can apparently run DDR2-800 at 400MHz, but will the CL remain at 6, or will it drop to 3?)
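A quick back-of-envelope on the latency side (a rough sketch in Python, simply treating CAS latency as CL cycles divided by the memory clock; whether the sticks actually expose a CL3 profile at DDR2-400 depends on their SPD table):

# Rough sketch: CAS latency in nanoseconds = CL cycles / memory clock,
# where the DDR2 memory clock is half the effective data rate.
def cas_latency_ns(cl_cycles, data_rate_mts):
    clock_mhz = data_rate_mts / 2           # DDR2-800 -> 400 MHz clock
    return cl_cycles / clock_mhz * 1000     # cycles / MHz -> nanoseconds

print(cas_latency_ns(6, 800))   # DDR2-800 CL6  -> 15.0 ns
print(cas_latency_ns(3, 400))   # DDR2-400 CL3  -> 15.0 ns

So the absolute latency works out the same (15 ns) either way; the question is really whether the modules advertise a CL3 timing set at the lower clock.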

YouTube

Reply 118 of 243, by spiroyster

Rank: Oldbie

Greetings Irinikus! 😀

Irinikus wrote on 2023-09-05, 14:11:

I'm definitely going to try a GTX Titan Z in this system when the opportunity to purchase one presents itself! (I'll set it up in the same way as I've set up the GTX 690!)

The only thing that bothers me is that the Titan Z is clocked at a lower frequency than the GTX 690, and if you take a look at the GPU usage that I got in Crysis, the Titan Z may actually offer reduced performance in this case!?

As someone who had (and still has in a box somewhere) a GTX 690 and upgraded to a Titan Black back in the day, I predict the Titan Z will smoke the 690 (I never had a Titan Z though).

The Titan Black was definitely an upgrade over the 690 (while I do love my 690). In some specialised conditions the 690 could very well beat a Titan (non-Black version), but that workload would have to make maximum use of SLI, have minimal VRAM requirements (due to the limited, unstacked VRAM on the 690) and saturate the entire CUDA core capacity (without capitalising on the higher ROP and TMU count of the Titan). While the Titan's clocks might be lower, the ROPs and TMUs allow greater raster performance/throughput, and coupled with the higher CUDA core count, the reduced clock speed becomes a poor metric to gauge them against each other, since shader execution throughput can still outperform the higher-clocked hardware with fewer cores. Not to mention the wider memory bus of the Titan, too.

The Titan Black was a higher-clocked Titan and so would be even better at beating the 690... which it was (the Titan Black gave noticeably smoother and higher FPS, without stutter, in benchmarks such as Unigine Heaven than the 690 utilising its SLI).

It's my understanding that the Titan Z is more like dual Titan Blacks than dual Titans (although according to some sources this is a bit grey, and it may even be two slightly hindered Titan cores), so I would assume that even 'half' of the Titan Z would match or outperform the 690 in most (if not all) things.
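For a rough feel of why the lower clock isn't the whole story, here's a quick sketch comparing raw FP32 shader throughput (cores × 2 FLOPs per clock), using the commonly published base clocks; this is only a paper number and ignores ROPs, TMUs, memory bandwidth and SLI scaling:

# Rough sketch: peak FP32 throughput per GPU die (TFLOPS), using published
# base clocks; boost clocks on both cards are higher.
def peak_fp32_tflops(cuda_cores, clock_mhz):
    return cuda_cores * 2 * clock_mhz / 1e6   # 2 FLOPs per core per cycle (FMA)

print(peak_fp32_tflops(1536, 915))   # one GTX 690 die  -> ~2.8 TFLOPS
print(peak_fp32_tflops(2880, 705))   # one Titan Z die  -> ~4.1 TFLOPS

Even at the lower base clock, each half of the Titan Z has roughly 40% more raw shader throughput on paper than each half of the 690.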

Also, I think your screenshots clearly show a CPU bottleneck (the GPU is waiting on the CPU), though why, I don't know. IIRC Crysis could use a maximum of 4 cores, but it is probably heavily limited by single-core throughput. I can't remember what FPS Crysis gave me with the GTX 690, but it should be a lot higher than 25 😉. I was running it on dual X5690s (X58 system), very playable at 1920x1080 (50+ FPS IIRC) and even playable at 2560x1600 at the time, if memory serves.

Irinikus wrote on 2023-09-05, 17:51:
The Serpent Rider wrote on 2023-09-05, 17:42:

It doesn't. The Titan has almost twice the number of CUDA cores.

How is GPU usage calculated?

That is a very good question!

I think the word 'usage' on these overlays is a bit of a red herring for indicating whether your GPU is at capacity or not; rather, it indicates how much the CPU is waiting on the GPU (so 'usage' in this case means two different things for CPU and GPU). CPU usage shows how much of the total hardware is currently utilised, whereas for the GPU it actually shows how long the GPU is busy over time, irrespective of whether the GPU is at full capacity.

E.g. on a dual core, a single-threaded program could fully saturate just a single core, so the usage is presented as 50%, yet it is bottlenecking due to the limit of single-core performance. Using both cores at max would technically be (and be presented as) 100%.

For the GPU it's different: a shader that does not fully saturate the GPU architecture (i.e. nV warps or AMD wavefronts), so that hardware usage is not 100%, could still be bottlenecking because the CPU is having to wait for the GPU to finish; even though only a small percentage of the total hardware performance is utilised, this is presented as '100% usage'.

So in both cases there are bottlenecks, and in both cases neither piece of hardware is at full capacity, but they present different values for 'usage'.
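To make that concrete, here's a tiny sketch of how a busy-time based 'usage' figure is typically derived (the numbers are made up purely for illustration, not any vendor's actual telemetry): sample whether the GPU was executing anything during each interval and report the busy fraction, regardless of how many of its execution units were actually occupied.

# Sketch: 'usage' as the fraction of sample intervals in which the GPU was
# busy at all, versus how much of the hardware was actually occupied.
samples = [
    # (gpu_busy_this_interval, fraction_of_execution_units_occupied)
    (True, 0.30), (True, 0.25), (True, 0.35), (True, 0.30),
]

usage = 100 * sum(busy for busy, _ in samples) / len(samples)
occupancy = 100 * sum(occ for _, occ in samples) / len(samples)

print(f"reported GPU 'usage':      {usage:.0f}%")       # 100% - busy every sample
print(f"actual hardware occupancy: {occupancy:.0f}%")   # 30% - most units idle

The overlay would happily report 100% here, even though most of the GPU's execution resources are sitting idle.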

Also note, I think most of these 'overlays' rely on vendor-specific telemetry for any detailed information about GPU usage, and while it probably isn't lying... it's proprietary, so we don't exactly know what it is reporting with respect to usage and capacity, or how it reaches its conclusion. GPU architecture doesn't fit all usage scenarios: it's highly multi-threaded, so it can only really be saturated with embarrassingly parallel problems, and even then, race conditions/concurrency, overall process design with respect to scheduling (driver-side AND client-side, that is) and order of operations throw yet more spanners into the works of fully utilising a GPU's capacity. It's hard to keep a GPU's workload at optimum full capacity all the time, and in most cases it's actually impossible, given the way the architecture needs to be used.

TL;DR: CPU usage shows 'how much is used'; GPU usage shows 'how long it takes'...

Reply 119 of 243, by Irinikus

Rank: Oldbie
spiroyster wrote on 2023-09-06, 10:17:
Greetings Irinikus! 😀 […]

Thanks for the info, it's much appreciated! 😀

It's great to see another member of the SGI crowd here! 😀

YouTube