VOGONS


My 3DMark01 Mega Thread


Reply 260 of 804, by BSA Starfire

User metadata
Rank Oldbie


Pentium 4 2.4B, Radeon HD 4670 1GB GDDR3 (AGP), Win XP SP3

Attachments

  • Filename
    radeonhd.JPG
    File size
    823.42 KiB
    Downloads
    No downloads
    File license
    Fair use/fair dealing exception

286 20MHz,1MB RAM,Trident 8900B 1MB, Conner CFA-170A.SB 1350B
386SX 33MHz,ULSI 387,4MB Ram,OAK OTI077 1MB. Seagate ST1144A, MS WSS audio
Amstrad PC 9486i, DX/2 66, 16 MB RAM, Cirrus SVGA,Win 95,SB 16
Cyrix MII 333,128MB,SiS 6326 H0 rev,ESS 1869,Win ME

Reply 261 of 804, by Darkman

User metadata
Rank Oldbie

Decided to try an overpowered GPU for my now-WinXP machine (2000 had some issues with this CD drive; for whatever reason it would not boot with it connected). Overpowered because this is an Athlon XP 2800+; the GPU I got was a Radeon X1950 Pro AGP 512MB. For the price I got it at, it's great; it works and runs fine.

3DMark01 doesn't seem to work, though. It did work with the 5900U, but it gives me the following error with the X1950 (I'm using Catalyst 7.10):

untitled_zpsf1c502e3.png

Going to show how bottlenecked this card is: I ran the Doom 3 timedemo with the 5900U at high settings, 1024x768, and got 35fps. With the X1950 I get 47fps.
Not a big increase, until I ramped the resolution up to 1600x1200, which gave me a score of..... 47... Although I will say actual gameplay was much better (in Hell, the 5900U was struggling at 25fps at 1024x768, while the X1950 was running it at 55-60 at 1600x1200).
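A quick back-of-the-envelope check of those timedemo numbers (the fps figures are from the post above; the interpretation is the usual one, that an fps ceiling which doesn't move with resolution points at the CPU):

```python
# Rough CPU-bottleneck check using the Doom 3 timedemo numbers from the post.
# If raising the resolution leaves the frame rate unchanged, the CPU (not the
# GPU) is the limiting factor at the lower resolution too.

def percent_gain(old_fps, new_fps):
    """Relative improvement of new_fps over old_fps, in percent."""
    return (new_fps - old_fps) / old_fps * 100

fps_5900u_1024 = 35   # GeForce FX 5900 Ultra, 1024x768
fps_x1950_1024 = 47   # Radeon X1950 Pro,     1024x768
fps_x1950_1600 = 47   # Radeon X1950 Pro,     1600x1200

gain = percent_gain(fps_5900u_1024, fps_x1950_1024)
print(f"Card swap at 1024x768: {gain:.0f}% faster")   # ~34% faster

# ~2.44x more pixels per frame, yet the same fps: a classic CPU limit.
pixel_ratio = (1600 * 1200) / (1024 * 768)
print(f"{pixel_ratio:.2f}x the pixels, 0% fps drop -> CPU-bound")
```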

Reply 262 of 804, by AlphaWing

User metadata
Rank Oldbie

Amd Athlon 64 x2 3800+ @ 2.4ghz 240mhz FSB Socket 939
Abit Kn8 Ultra Nforce 4
Geforce 550ti, 4gb DDR-400-2.5-3-3-2t
WinXP-Sp3 Forceware 337.88

3dmark Score = 30017

GPU is not a match for this machine 🤣.
It's in there so I can run the SteamOS beta on this machine.

Last edited by AlphaWing on 2015-08-13, 04:51. Edited 1 time in total.

Reply 263 of 804, by AlphaWing

User metadata
Rank Oldbie
  • Asus Geforce FX 5800
    Nforce2 Gigabyte GA-7n400pro2
    Athlon Xp 3000+ 167mhzFSB L2 512k
    1gb DDR 1:1 3-2-2-6
    SBLIVE! 5.1
    Windows XP SP3
    Forceware 175.19

3dmark Score = 12479

Moved my FX5800 from the 1.4ghz Tualatin machine to this one.
A BIG upgrade for it.

Last edited by AlphaWing on 2015-08-13, 04:51. Edited 1 time in total.

Reply 264 of 804, by Mau1wurf1977

User metadata
Rank l33t++

P4 3.2 GHz Northwood (HT disabled) with a GeForce4 Ti 4200 AGP8x and latest drivers.

JoFlDWf.png

My website with reviews, demos, drivers, tutorials and more...
My YouTube channel

Reply 265 of 804, by AlphaWing

User metadata
Rank Oldbie

Asus P5VDC-MX - VIA P4m800pro
Pentium 4 D935 3200mhz No HT support.
2gb DDR2 @ 266mhz 5-4-4-11-2T
BFG Geforce 6600 GT AGP
SBLIVE! 5.1
WinXP SP3
Forceware 175.19

3DMARK SCORE = 18428

This is the Via Based LGA 775 P4 I have.
You would think I could have done these tests in 9x, but nope; I can't get past any version of the 9x installer.
This board also accepts DDR1, which is interesting, and there is a PCIe x1 slot on the board next to the AGP too. Lots of combos going on with this mobo.

The GeForce 6 is the replacement for the 9800 Pro that a cheap Chinese heatsink fried. It surprisingly does better than the 9800 Pro, despite having a 128-bit memory bus and a PCIe-to-AGP bridge chip. Since this machine refuses to run 9x, it's a perfect spot for it.
Oddly enough, if I boot the machine with DOS 6.22 or the unofficial 7.0 and use a sound card like the YMF-7xx PCI series or an Xwave series card, I get full sound in DOS. The SB Live! also works on this machine in DOS. So it has the potential to be a really fast DOS machine, since it still has DDMA support.

Last edited by AlphaWing on 2015-08-13, 04:52. Edited 1 time in total.

Reply 266 of 804, by BSA Starfire

User metadata
Rank Oldbie

Here's my VIA-based NetBurst system. Board is an MSI 7255 VIA P4M890, 1 gig DDR2, Celeron D 360 (3.46GHz, 533 FSB, 512k cache), Cedar Mill D0 stepping. This is with the onboard UniChrome video.

Attachments

  • Filename
    3mmark.JPG
    File size
    215.61 KiB
    Downloads
    No downloads
    File license
    Fair use/fair dealing exception

286 20MHz,1MB RAM,Trident 8900B 1MB, Conner CFA-170A.SB 1350B
386SX 33MHz,ULSI 387,4MB Ram,OAK OTI077 1MB. Seagate ST1144A, MS WSS audio
Amstrad PC 9486i, DX/2 66, 16 MB RAM, Cirrus SVGA,Win 95,SB 16
Cyrix MII 333,128MB,SiS 6326 H0 rev,ESS 1869,Win ME

Reply 267 of 804, by BSA Starfire

User metadata
Rank Oldbie


Upgraded the via Celeron D system posted above, now has a core2 E4300 & Geforce 6800, now that's more like it 😀

Attachments

  • Filename
    e43006800.JPG
    File size
    216.04 KiB
    Downloads
    No downloads
    File license
    Fair use/fair dealing exception

286 20MHz,1MB RAM,Trident 8900B 1MB, Conner CFA-170A.SB 1350B
386SX 33MHz,ULSI 387,4MB Ram,OAK OTI077 1MB. Seagate ST1144A, MS WSS audio
Amstrad PC 9486i, DX/2 66, 16 MB RAM, Cirrus SVGA,Win 95,SB 16
Cyrix MII 333,128MB,SiS 6326 H0 rev,ESS 1869,Win ME

Reply 268 of 804, by kithylin

User metadata
Rank l33t
BSA Starfire wrote:

<snip>

Upgraded the via Celeron D system posted above, now has a core2 E4300 & Geforce 6800, now that's more like it 😀

It might be a little faster if you turned speedstep off for that chip. I doubt 3dmark 2001se will load the CPU enough to trigger switching up to the upper CPU speeds.

Reply 269 of 804, by BSA Starfire

User metadata
Rank Oldbie
kithylin wrote:
BSA Starfire wrote:

<snip>

Upgraded the via Celeron D system posted above, now has a core2 E4300 & Geforce 6800, now that's more like it 😀

It might be a little faster if you turned speedstep off for that chip. I doubt 3dmark 2001se will load the CPU enough to trigger switching up to the upper CPU speeds.

Interesting thanks, never thought of that, I'll give it a try 😀

286 20MHz,1MB RAM,Trident 8900B 1MB, Conner CFA-170A.SB 1350B
386SX 33MHz,ULSI 387,4MB Ram,OAK OTI077 1MB. Seagate ST1144A, MS WSS audio
Amstrad PC 9486i, DX/2 66, 16 MB RAM, Cirrus SVGA,Win 95,SB 16
Cyrix MII 333,128MB,SiS 6326 H0 rev,ESS 1869,Win ME

Reply 270 of 804, by kithylin

User metadata
Rank l33t
BSA Starfire wrote:

Interesting thanks, never thought of that, I'll give it a try 😀

A little tip for you: if you select the "Always On" power profile in XP's power settings, it should make the CPU run at its max multiplier all the time (sort of disabling SpeedStep). Try it out and see what you think, and if you like it, you can then turn SpeedStep off in the BIOS.
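For the command-line inclined, the same profile switch can be done with XP's built-in `powercfg` utility, a sketch assuming an English install (power scheme names are localized display strings, so "Always On" may differ on other languages):

```shell
:: Windows XP's command-line counterpart to the Power Options control panel.

:: List the available power schemes and show which one is active
powercfg /list

:: Switch to the scheme suggested above (holds the CPU at max multiplier)
powercfg /setactive "Always On"

:: Inspect the scheme's settings to confirm
powercfg /query "Always On"
```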

Reply 271 of 804, by obobskivich

User metadata
Rank l33t

In general I wouldn't turn things like EIST off except for benchmarking - they contribute to cooler-running and lower power-consuming systems. But kithylin is right that for 3D01 or what-have-you disabling the feature will probably improve the score a little bit. If you have a second monitor you can watch CPU clocks in CPU-Z while running 3D01 to see if it's running at full-speed or not.

Reply 272 of 804, by kithylin

User metadata
Rank l33t
obobskivich wrote:

In general I wouldn't turn things like EIST off except for benchmarking - they contribute to cooler-running and lower power-consuming systems.

EIST is thermal protection and is totally different from speedstep. EIST should never be touched and left on at all times.

All speedstep does is change the multiplier up/down based on load. All intel chips are designed to run at max speedstep all the time on the stock cooler designed for them with no ill effects. Imagine if someone was doing an animation render job and left the computer on 24-7 for a few months, it would be at max speedstep (and max load) that entire time. The chips are designed to do this and handle it as part of the stock heatsink design and thermal design. They may run -slightly- hotter and use a smidge more power but if they're not overclocked, the difference is negligible and the performance gains are huge. In general anyone that owns an Intel CPU not running it at max speed is missing out on what their chip is capable of, this goes for older core2duo chips and even the modern ones. And besides that, the maximum temperature these chips can handle is 95c before shutting off. It's almost impossible to hit that on an air cooler with a stock (not-overclocked) chip, even with the stock cooler.

Reply 273 of 804, by obobskivich

User metadata
Rank l33t
kithylin wrote:

EIST is thermal protection and is totally different from speedstep. EIST should never be touched and left on at all times.

EIST = Enhanced Intel SpeedStep Technology. It's the same thing as SpeedStep: power conservation and throttling first introduced with the Pentium III, while EIST itself debuted with the Pentium M (and technically modern processors have "EEIST" - Enhanced EIST, but that's a mouthful 😵), which not only adjusts frequency in a more granular manner than previous hardware, but can also adjust cache availability and other features to further improve thermals and power draw in response to load.

All speedstep does is change the multiplier up/down based on load.

Not entirely. Depending on the chip it can adjust cache availability, FSB, vCore, etc. to reduce thermals and power consumption when the CPU is not heavily loaded (which is determined based on performance state). The original implementation was designed to improve battery life in mobile computers, but it's made its way into desktop systems to reduce heat and power draw when the machine isn't doing particularly much (and there's nothing wrong with that EXCEPT when you're benchmarking). If your GPU has adaptive clocking, that should be defeated for benchmarking as well if possible, but for normal usage these features can (and should) be left in their default-on positions (on many GPUs, defeating the adaptive clocking will require the fan to run noticeably louder all the time).

To give an example from my modern system that has EIST and nVidia GPU Boost, full-clocks for the CPU are 2.8GHz, and the GPU at 1.1GHz, however even in relatively modern games (like Skyrim) it isn't uncommon to see the CPU at 2GHz and GPU at 700-900MHz; on loading-screens and during cut-scenes it's not uncommon to see the GPU drop all the way down to 324MHz (where it idles on the Aero desktop). None of this matters for the game (gameplay is perfectly smooth), but for benchmarking this will usually drop scores because "light load" areas will see the power management kick-in.

All intel chips are designed to run at max speedstep all the time on the stock cooler designed for them with no ill effects.

In theory, in ideal conditions, yadda yadda. There's still nothing bad about running cooler and using less power when the machine doesn't need to be at 100%. 😀

I'm not trying to argue the whole "my CPU needs to be at absolute zero" point here - as long as you're under the envelope maximum it isn't generally a problem, but nothing is HARMED by running cooler where possible, and the power savings while not dramatic on a per-user basis, become meaningful when spread out across the millions of machines deployed.

Imagine if someone was doing an animation render job and left the computer on 24-7 for a few months, it would be at max speedstep (and max load) that entire time.

This is a minority usage scenario, and render farms are generally designed and built to handle 24x7 loading in terms of heat and power handling. They're usually fairly loud setups as a result. At-home systems is another story altogether, and standard parts aren't generally designed with 100% 24x7 duty cycle in mind (and you can find plenty of stories about F@H, SETI, coin-mining, etc killing hardware prematurely due to heat).

They may run -slightly- hotter and use a smidge more power but if they're not overclocked, the difference is negligible and the performance gains are huge. In general anyone that owns an Intel CPU not running it at max speed is missing out on what their chip is capable of, this goes for older core2duo chips and even the modern ones.

Not true. The difference in power draw and temperatures can be fairly dramatic on the newest hardware, and the performance gains are not noticed outside of benchmarks - if the system is being loaded enough to need "full power" it will get "full power" (and on chips with Turbo they'll even go beyond that as long as it's within their thermal envelope). What it will do as a benefit is let the thing run cooler, quieter, and lower-power-draw when it's just sitting on the desktop or checking email or doing any of a number of other tasks where a modern CPU only sees a few % load. There is no good reason to waste power, run hotter, etc in those situations. Even if the cooling solution is capable of dealing with it.

To give you a modern (and somewhat extreme) example, look at nVidia Kepler GPUs with GPU Boost:
http://www.techpowerup.com/reviews/NVIDIA/GeF … X_Titan/25.html
http://www.techpowerup.com/reviews/NVIDIA/GeF … X_Titan/34.html

CPUs generally aren't quite so dramatized, but 20-40% power savings aren't unreasonable with EIST (or AMD's equivalent), along with the associated reduction in heat output and (potentially) noise. 😀

And besides that, the maximum temperature these chips can handle is 95c before shutting off. It's almost impossible to hit that on an air cooler with a stock (not-overclocked) chip, even with the stock cooler.

You've obviously never had to clean 3-4 years of cat hair out of grandma's old Dell Inspiron... 🤣

Also, for the processor originally mentioned (Core 2 Duo E4300), TCase is 61.4* C per Intel specifications (http://ark.intel.com/products/28024/Intel-Cor … 2%20Duo%20E4300).

Some other random processors I looked up on ARK:

Core i7 920: 67.9* C
Core i7 4770k: 72.7* C
Core 2 Extreme QX9770: 55.5* C
Pentium G3258: 72* C

Can you point out a specific example of a chip that has TCase at 95* C or higher? Or that is documented running continuously at 95* C 24x7 in a 100% loading scenario and not failing?

Again - I'm not disagreeing with you on defeating power management features on a desktop for benchmarking; it can have an impact on the resulting scores*, however for normal use (including for gaming) those features harm nothing by being left on, and will generally reduce heat, noise, and power draw by doing their job.

* As far as the "why" for those interested: generally benchmarks like 3DMark, AquaMark, Uniengine, etc attempt to emulate real-world gaming loads so they will have variable complexity throughout the run of the suite, and the load on the CPU/GPU will accordingly go up and down in response. With power management enabled the low loading sequences will see the CPU/GPU drop into lower power states, and while this will still produce playable frame-rates, you won't get the absolute highest frame-rates possible during those sequences, and in many cases it's those "high climbs" that can beef up a score (they run up the overall FPS average for a test, the entire run, etc). In gaming the same effect happens, however in gaming we generally don't care about achieving the highest possible FPS, only a playable frame-rate (whatever that's defined as for your specific application), so anything extra is irrelevant, whereas in a benchmark that "extra" can mean a higher score.
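The scoring effect described in that footnote is easy to sketch numerically. A toy model (all figures invented for illustration): a benchmark run alternates heavy and light scenes; with power management on, the light scenes are capped at a down-clocked frame rate, so their "free" FPS spikes never materialize and the run's average drops:

```python
# Toy model of why power management trims benchmark averages: heavy scenes
# run the same either way, but the "easy" scenes lose their FPS spikes when
# the CPU/GPU drops into a low-power state. All numbers are illustrative.

def average_fps(scene_fps, low_power_cap=None):
    """Mean FPS over equal-length scenes, optionally clamped by a cap
    that models a reduced clock during light-load sequences."""
    if low_power_cap is not None:
        scene_fps = [min(f, low_power_cap) for f in scene_fps]
    return sum(scene_fps) / len(scene_fps)

# Heavy scenes (~40-60 fps) force full clocks; light ones would hit 150-200.
scenes = [45, 60, 180, 40, 200, 55, 150]

full = average_fps(scenes)                       # power management defeated
managed = average_fps(scenes, low_power_cap=90)  # light scenes down-clocked

print(f"full clocks : {full:.1f} fps average")
print(f"managed     : {managed:.1f} fps average")
# Gameplay feel is identical (heavy scenes unchanged), but the score is not.
```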

For those that are curious, open-up CPU-Z and/or GPU-Z and watch their clock, vcore, etc readouts while you run a benchmark or a game to see this effect in action (ofc you have to have hardware/software that supports power management, and has it enabled).

Reply 274 of 804, by kithylin

User metadata
Rank l33t
obobskivich wrote:

Not true. The difference in power draw and temperatures can be fairly dramatic on the newest hardware, and the performance gains are not noticed outside of benchmarks

obobskivich wrote:

CPUs generally aren't quite so dramatized, but 20-40% power savings aren't unreasonable with EIST (or AMD's equivalent), along with the associated reduction in heat output and (potentially) noise. 😀

A local friend of mine has a modern i7-2600 (not the K series) chip with the factory Intel cooler. We recently turned his SpeedStep off and let it run at max speed all the time. I measured it at the outlet: under 100% load it only draws an additional 10 watts, CoreTemp64 said it only ran 5c hotter, and it shaved a whole hour off his Adobe animation render times. Previously a render would take him 5 hours (default situation: SpeedStep on, Turbo on); with SpeedStep off it's down to 4 hours. It wouldn't hit max Turbo speed even under 100% load for 5 hours. It actually does have a real-world impact, and it's quite beneficial, even on modern hardware. It adds marginal heat load, and with a modern high-efficiency power supply it barely makes any difference at the wall. Also, the modern Intel stock coolers can't be heard over the case fans even at 100% fan speed.
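For what it's worth, that anecdote can be turned into a rough energy comparison. The +10 W delta and the 5 h vs 4 h render times are from the post above; the 180 W baseline wall draw is an assumed figure purely for illustration:

```python
# Back-of-the-envelope energy cost of the render job described above.
# Baseline wall draw under load is ASSUMED at 180 W; the +10 W extra draw
# and the 5 h -> 4 h render times come from the post.

BASE_W = 180     # assumed full-load draw at the wall, SpeedStep on
EXTRA_W = 10     # measured extra draw with SpeedStep off
HOURS_ON = 5     # render time with SpeedStep on
HOURS_OFF = 4    # render time with SpeedStep off

kwh_on = BASE_W * HOURS_ON / 1000
kwh_off = (BASE_W + EXTRA_W) * HOURS_OFF / 1000

print(f"SpeedStep on : {kwh_on:.2f} kWh over {HOURS_ON} h")
print(f"SpeedStep off: {kwh_off:.2f} kWh over {HOURS_OFF} h")
# Under these assumptions the faster run also uses less total energy,
# because the job finishes an hour sooner at only slightly higher draw.
```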

Another example is my Gateway laptop with a 1.6GHz Pentium dual-core chip. With SpeedStep on it can't run YouTube at 1080p full-screen: it switches to full screen, freezes, and eventually the browser crashes. But with SpeedStep off, it plays 1080p clips full screen smoothly with no skipping. Another real-world, very beneficial effect.

I've done extensive testing and multiple 48- and 72-hour burn tests on i7-4770 systems, 3600 systems, and 2600 systems, all with "Intel Burn Test" (which is really brutal and simulates a worst-case scenario), and SpeedStep doesn't affect anything whatsoever, other than making things faster. I'm not just making this stuff up; I've done extensive testing with a wide range of platforms, both new and old, desktop and mobile.

obobskivich wrote:

Also, for the processor originally mentioned (Core 2 Duo E4300), TCase is 61.4* C per Intel specifications (http://ark.intel.com/products/28024/Intel-Cor … 2%20Duo%20E4300).

Some other random processors I looked up on ARK:

Core i7 920: 67.9* C
Core i7 4770k: 72.7* C
Core 2 Extreme QX9770: 55.5* C
Pentium G3258: 72* C

Can you point out a specific example of a chip that has TCase at 95* C or higher? Or that is documented running continuously at 95* C 24x7 in a 100% loading scenario and not failing?

I'm guessing you haven't delved into the realm of extreme overclocking on these chips and pushed them past 4GHz to figure out where the "real limits" are. Those stated Intel temps are a general guideline. What I was referring to is the built-in thermal protection inside the CPU. Nothing happens on these chips (45nm and newer Intel CPUs, I mean; the older 65nm chips don't have the protection) when you run them up near 85c-90c; it isn't until you actually hit 95c that they shut the system off and save it, and they don't sustain physical damage until you get past 100c. Even with all failsafes disabled in the BIOS they still do this. The modern Sandy Bridge, Ivy Bridge and Haswell chips will actually throttle the CPU back when they get near 95c instead of doing a full shutoff. Anything below that is 'all gravy' and causes no issues. Intel chips can handle a lot more than most people expect. They're not these fragile little things you have to baby all day.

Reply 277 of 804, by BSA Starfire

User metadata
Rank Oldbie

I tried the Core 2 again with SpeedStep disabled; it was within a few points either way, so I'm not going to worry about posting the pic. Here is a more interesting oddball machine for your perusal though: AMD Sempron 2800+, ASUS A7V400-MX (VIA KM400), 1 gig DDR (166MHz) & an ECS AG315-64 SiS 315E 64MB AGP card. Dirt-cheap system at the time, but it's quite quick and very stable. A few quirks on the benchmark: bump mapping looked horrid, and for point sprites it didn't bother to draw half the pixels, but otherwise alright. Fun to play with these more obscure accelerators. 😀

Attachments

  • Filename
    sis315.JPG
    File size
    189.12 KiB
    Downloads
    No downloads
    File license
    Fair use/fair dealing exception

286 20MHz,1MB RAM,Trident 8900B 1MB, Conner CFA-170A.SB 1350B
386SX 33MHz,ULSI 387,4MB Ram,OAK OTI077 1MB. Seagate ST1144A, MS WSS audio
Amstrad PC 9486i, DX/2 66, 16 MB RAM, Cirrus SVGA,Win 95,SB 16
Cyrix MII 333,128MB,SiS 6326 H0 rev,ESS 1869,Win ME

Reply 278 of 804, by rick6

User metadata
Rank Member

I was given a few computers that were heading for the recycling bin, one of them a Pentium 4 3GHz on socket 775 with an ASRock s775 AGP board and a GeForce2 MX400 AGP (odd mix, huh?). I might try to resurrect the board, which is full of bulged caps around the CPU area, throw in a Radeon 3850 AGP and one of those infamous Pentium 4 3.6GHz PrescHOTt CPUs, and see what it scores!

My 2001 gaming beast in all its "Pentium 4 Willamette" glory!

Reply 279 of 804, by GeorgeMan

User metadata
Rank Oldbie

Just ran it for fun on my main PC.
Settings: CPU Core i5 2500K @ stock, undervolted.
Radeon HD7870XT Tahiti LE core, with boost, 2GB GDDR5 5GHz
8GB RAM 2133MHz dual channel

prnt_zpsfcf81e90.jpg

Edit:
And one with the same settings, except the CPU running at 4.5GHz

prnt_zps88d845ba.jpg

Core i7-13700 | 32G DDR4 | Biostar B760M | Nvidia RTX 3060 | 32" AOC 75Hz IPS + 17" DEC CRT 1024x768 @ 85Hz
Win11 + Virtualization => Emudeck @consoles | pcem @DOS~Win95 | Virtualbox @Win98SE & softGPU | VMware @2K&XP | ΕΧΟDΟS