VOGONS


New Stuff :D


Reply 20 of 29, by archsan

Rank: Oldbie

Well, billions of people on earth don't have "i7 with 4 GFX cards" for whatever reasons. Pointless and a total waste of LIFE. SHAME ON THEM!!!

Hehe, can't help it. 😁

P.S. that was half sarcasm + half joke, please don't take it seriously -- besides, I only got one GTX 470 now in my i7 rig, honest!

Last edited by archsan on 2012-11-18, 15:36. Edited 1 time in total.

Reply 21 of 29, by subhuman@xgtx

Rank: Oldbie

Well, it depends on who you ask... I actually have four GTX 680s with a Maximus V Extreme and a 3770K OCed to 5 GHz, and it has let me play games @ 2560x1600 with insane settings at incredible frame rates (for example: Crysis 1 with texture mods and custom CFGs plus extra shaders - 60 fps; BF3 64-player Ultra high - more than 120 fps without drops), things that seemed impossible for me two years ago.

It is true that most games nowadays are designed with consoles in mind. However, heavy mods and high antialiasing settings are what make multi-graphics-card systems shine 😁

Edit: don't forget multi monitor gaming!

Reply 22 of 29, by archsan

Rank: Oldbie
subhuman@xgtx wrote:

Well, it depends on who you ask... I actually have four GTX 680s with a Maximus V Extreme and a 3770K OCed to 5 GHz, and it has let me play games @ 2560x1600 with insane settings at incredible frame rates (for example: Crysis 1 with texture mods and custom CFGs plus extra shaders - 60 fps; BF3 64-player Ultra high - more than 120 fps without drops), things that seemed impossible for me two years ago.

Do you have the 2GB or the 4GB version of those GTX 680s? I was going to consider 3GB 580s and then 4GB 680s now that Octane Render has improved support for Kepler. Also such setups are great for realtime archviz on CryEngine or UnrealEngine.

The GK110 would be ideal, but it seems to me now that NVIDIA is starting to protect their pro market (i.e. Tesla & Quadro) by not releasing their top-end chip in the gaming segment of the market. I might be wrong though... It also depends on the other side (cough, AMD) releasing something that would force them to compete.

Reply 23 of 29, by subhuman@xgtx

Rank: Oldbie

I have the 2GB versions of the 680. Based on experience, if you are gaming just at 2560x1600 then it's fine for even the most demanding games, but I don't know whether the same holds for work apps.

Reply 24 of 29, by subhuman@xgtx

Rank: Oldbie

From what I have seen, the GeForce GTX 680 has even less fp64 performance than the GTX 580 (actually 1/24 of its own fp32 perf). NV apparently did this to defend their super-expensive Quadro brand, to make users buy those instead of the much cheaper, gaming-oriented GeForce series cards.
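
Rough back-of-the-envelope math behind those ratios (a minimal sketch; the core counts, clocks and the 1/8 vs 1/24 caps are approximate figures from public spec sheets, not measured numbers):

#include <cstdio>

int main() {
    // Peak fp32 throughput = cores * clock (GHz) * 2 ops per fused multiply-add
    double gtx680_fp32 = 1536 * 1.006 * 2;   // ~3090 GFLOPS (GK104, base clock)
    double gtx580_fp32 = 512  * 1.544 * 2;   // ~1581 GFLOPS (GF110, shader clock)

    // GeForce-class fp64 caps: GK104 runs fp64 at 1/24 of fp32, GF110 at 1/8
    double gtx680_fp64 = gtx680_fp32 / 24.0; // ~129 GFLOPS
    double gtx580_fp64 = gtx580_fp32 / 8.0;  // ~198 GFLOPS

    printf("GTX 680 fp64 ~ %.0f GFLOPS\n", gtx680_fp64);
    printf("GTX 580 fp64 ~ %.0f GFLOPS\n", gtx580_fp64);
    return 0;
}

So despite being nearly twice as fast in fp32, the 680 ends up well behind the 580 in double precision.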

Reply 25 of 29, by sunaiac

Rank: Oldbie
archsan wrote:

The GK110 would be ideal, but it seems to me now that NVIDIA is starting to protect their pro market (i.e. Tesla & Quadro) by not releasing their top-end chip in the gaming segment of the market. I might be wrong though... It also depends on the other side (cough, AMD) releasing something that would force them to compete.

GK110 in its current form has twice the transistor count of the GK104, 30% less frequency, barely 10% more 32-bit performance for 15% more energy at 225W. Yes, it has 8 times the 64-bit performance. Who cares for gaming? I don't see how it's close to "ideal" for anything but e-peen in the supercomputer Top 500.


The enemy of the GK110 has never been the 7970, but the GK104 itself. nVidia can sell the 10% slower GTX 680 for 10% more than the GHz Edition 7970, because it says nVidia on the box (current French prices, dunno about other places). When the GK104 came out at 1 GHz+ with 680 branding, nVidia said that was because the 7970 didn't deliver. Well, that might be true, but the GK110 was also kinda undone by the fact they could clock the GK104 much higher than they expected.

On the other hand, the GK104 is for once not so castrated compared to the Quadro. What makes it very efficient is that it's purely a gaming GPU, unlike the GTX 580 and the 7970. It was a smart move, the same one AMD made with the HD 4870 before.
This way they can make more money, pay more developers to slow games down on Radeon, get more buyers, so more money, etc... until AMD dies and we pay 1000€ apiece for crappy cards like the GTX 680 😀

PS: I own a GTX 580; I can recognize a good card as well as a rip-off.

R9 3900X/X470 Taichi/32GB 3600CL15/5700XT AE/Marantz PM7005
i7 980X/R9 290X/X-Fi titanium | FX-57/X1950XTX/Audigy 2ZS
Athlon 1000T Slot A/GeForce 3/AWE64G | K5 PR 200/ET6000/AWE32
Ppro 200 1M/Voodoo 3 2000/AWE 32 | iDX4 100/S3 864 VLB/SB16

Reply 26 of 29, by archsan

Rank: Oldbie

As the thread's been derailed anyway, I'll try to steer it into something more informative 😀

subhuman@xgtx wrote:

I have the 2GB versions of the 680. Based on experience, if you are gaming just at 2560x1600 then it's fine for even the most demanding games, but I don't know whether the same holds for work apps.

Games are not currently rendered with a physically based, unbiased renderer, and are basically very low-polygon compared to what you see in many of those awesome pre-rendered photorealistic images/animations today. Of course, the more memory we've got, the more polys/detail we can put into our scene. Please take a look at some of the latest demo samples from Octane: http://render.otoy.com/forum/viewtopic.php?f=7&t=25051

Especially the car video: even at just 720p you can already see the DIFFERENCE in the quality of lighting compared to current games:
http://www.youtube.com/watch?&v=m-0T5EaMGmg

And this is the infancy of the future of gaming graphics... Real-time path tracing demo:
http://www.youtube.com/watch?&v=FJLy-ci-RyY
(please excuse the low res, imagine the hundreds of current GTX's needed to run a flawless demo at 2560x1600)

the maker's blog, if you're interested:
http://raytracey.blogspot.com/

So this is how I got spoiled and can't help but think "meh" when I see the graphics of today's most demanding games (Doom 3 with Sikkmod included) -- in the department of photorealism, that is. Mostly because of the lighting quality, which just never feels right. Of course, I gave up waiting once I learned the difference between raster/shader tricks and truly unbiased raytracing.
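
(If anyone's curious what "truly unbiased" means in practice, here's a toy sketch of the core Monte Carlo loop every path tracer runs -- not Octane's or those demos' actual code, just an illustration with a stand-in for the real ray/scene work -- which also shows why it eats GPUs for breakfast: the cost scales with pixels times samples per pixel, and you need hundreds of samples before the noise goes away.)

#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };

// Stand-in: a real renderer would trace a ray from pixel (px, py) that bounces
// randomly around the scene and return the light carried back along that path.
Vec3 trace_random_path(int px, int py, std::mt19937 &rng) {
    (void)px; (void)py;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return { u(rng), u(rng), u(rng) };
}

int main() {
    const int width = 320, height = 180;
    const int samples_per_pixel = 256;   // more samples = less noise, linearly more work
    std::mt19937 rng(42);

    long long paths = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Vec3 sum{0, 0, 0};
            // The "unbiased" part: average many randomly sampled light paths per
            // pixel instead of faking the lighting with raster/shader tricks.
            for (int s = 0; s < samples_per_pixel; ++s) {
                Vec3 c = trace_random_path(x, y, rng);
                sum.x += c.x; sum.y += c.y; sum.z += c.z;
                ++paths;
            }
            // final pixel colour = sum / samples_per_pixel (image output omitted)
        }
    }
    printf("%lld paths for one %dx%d frame at %d spp\n",
           paths, width, height, samples_per_pixel);
    return 0;
}

At 2560x1600, with real intersection and shading costs for every one of those paths, that loop is why you'd need a pile of GTXs to run it flawlessly in real time.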

That's why it's also kind of meh to be flaming/demeaning/namecalling over something that'll be obsolete in a decade or two. NO NEED.

If you have it, just enjoy it like subhuman (apparently) does 😁 I mean holy kid! Just looking at a case full of beefy graphics boards is already awesome! Who would've imagined that in 1995?

C'mon, Iris and Squall, let's go back to friendly mode -- retro is more fun that way. 😀

Reply 27 of 29, by archsan

Rank: Oldbie
subhuman@xgtx wrote:

From what I have seen, the GeForce GTX 680 has even less fp64 performance than the GTX 580 (actually 1/24 of its own fp32 perf). NV apparently did this to defend their super-expensive Quadro brand, to make users buy those instead of the much cheaper, gaming-oriented GeForce series cards.

Yes, that's what put me off the GTX 680. For months now, the Octane team have not recommended the Kepler line because of the poor performance. But it could also be due to the different architecture. The good thing is that there are a lot of GTX 670/680 cards with a 4GB option, a step up from the GTX 570's 2.5GB and the 580's 3GB options. Tesla tops out at 6GB, as with the previous top Fermi-based Tesla/Quadro (there was a rumor of it having up to 24GB though).

sunaiac wrote:

GK110 in its current form has twice the transistor count of the GK104, 30% less frequency, barely 10% more 32-bit performance for 15% more energy at 225W. Yes, it has 8 times the 64-bit performance. Who cares for gaming? I don't see how it's close to "ideal" for anything but e-peen in the supercomputer Top 500.

Not for gaming. I said stuff on the previous page if you care, but the gist is that it'll be good for Octane Render (the thing that I care about 😀), which is CUDA-only for now. Now that it's been released, it's only a matter of time 'til someone comes up with rendering bench results for the new Tesla K20/X. Most users are using the GTX line though, as it is much more cost-effective. That's why a top-chip GTX matters more to this niche market (along with other small-business number-crunchers) than to the gamer market.

Reply 28 of 29, by subhuman@xgtx

Rank: Oldbie

Wow! Yes, obviously raytracing is the future of graphics rendering. Lighting looks so fake today...

Would integrating a dedicated chip for this task on newer-generation graphics cards (à la transform & lighting back in the day) actually make a difference?

Reply 29 of 29, by cdoublejj

Rank: Oldbie

Raytracing absolutely murders hardware; it's really demanding. A few years ago Carmack tried/started another raytracing experiment. I think it just managed 30 fps, or maybe it had decent fps but was a rather small/limited scene.