VOGONS


First post, by GigAHerZ

Rank: Oldbie

Hi all,

I've been using Phil's benchmark set to test the stability of my DOS machines. But it is a hit-or-miss thing.
The closest thing to a stability test I've seen in that set is DOOM starting to move in the wrong directions, as if the keyboard controls get out of sync with what is happening on the screen. The benchmark still completes, but visually you can clearly see something's not right.
... and even when Doom finishes properly, you sometimes still find the machine to be slightly unstable in later activities.

So I would like to find a tool that could reliably tell me if something's off.
I have a great 486 @ 3x50 MHz and I want to get the tightest settings possible. But at the same time, I want the machine to be rock-solid. Once I set it up, I want to be sure that I don't accidentally stumble on instability later on.

What would be the best software to test stability? Something that stresses the CPU and all levels of cache & memory hard enough to reliably and quickly expose an issue, if one exists.

"640K ought to be enough for anybody." - And i intend to get every last bit out of it even after loading every damn driver!

Reply 1 of 38, by Disruptor

Rank: Oldbie

Doom & Quake from the Dosbench suite
Playing 3 missions of Raptor (Call of the Shadows) with sound
Installing Windows NT4/2000
Starting Linux may produce surprising hangs with the tightest cache timings, because it may use different patterns to access memory and caches.

But these are not fast methods.
These are some examples to test whether your 3x50 timings are rock-stable or not.

Reply 2 of 38, by bakemono

Rank: Oldbie

My method: use PKZIP to zip a ton of files, and then extract the zip and see if any CRC errors result. Tests CPU, memory, and disk. It'll stress memory more if you make a big SMARTDRV cache first.
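That loop can be sketched as a DOS batch file. This is only an illustration of the idea, not bakemono's exact procedure: the paths are made up, and PKUNZIP's -t switch (which tests every CRC without extracting) stands in for a full extract-and-compare.

```
@echo off
rem Hypothetical sketch of the PKZIP stress loop (paths are assumptions).
rem pkzip -ex = maximum compression; pkunzip -t = test all CRCs in the archive.
:loop
pkzip -ex c:\temp\stress.zip c:\dos\*.*
pkunzip -t c:\temp\stress.zip
if errorlevel 1 goto bad
del c:\temp\stress.zip
goto loop
:bad
echo CRC error found - machine is NOT stable
```

Loading a large SMARTDRV cache first, as suggested above, pushes more of the work through RAM instead of the disk.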

again another retro game on itch: https://90soft90.itch.io/shmup-salad

Reply 3 of 38, by GigAHerZ

Rank: Oldbie
bakemono wrote on 2023-10-27, 11:19:

My method: use PKZIP to zip a ton of files, and then extract the zip and see if any CRC errors result. Tests CPU, memory, and disk. It'll stress memory more if you make a big SMARTDRV cache first.

I could even make an XMS RAM disk to keep the CPU fed with data.
But is that method actually good at detecting an almost-stable machine's issues? Do you have experience with it, or are you just suggesting it?

"640K ought to be enough for anybody." - And i intend to get every last bit out of it even after loading every damn driver!

Reply 4 of 38, by mkarcher

Rank: l33t

I've had quite a good success rate with booting a Linux live system (it might be something as old as Tom's Root/Boot) from floppy or CD (you can use the "Smart Boot Manager" to boot CDs on systems without CD boot support), and then repeatedly running

dd if=/dev/hda bs=1024k count=<count of megs RAM + 3> | md5sum

As long as no hard drive partitions are mounted, it should output the same checksum every time if the system is stable.

In newer Linux kernels, you might need to replace /dev/hda with /dev/sda, but live systems with kernels that new probably won't cope with the small RAM amounts typically installed in 486 computers.

The command calculates a cryptographically secure checksum (which is admittedly overkill, and makes the process unnecessarily slow on 486 processors) over the start of the hard drive. It reads as many megabytes as specified in the "count=" parameter. Using more than the RAM size forces each invocation of the command to re-read the data from disk, because the RAM is insufficient to cache it. In my experience, this is a very sensitive test for instabilities, because it involves RAM access, cache access and bus access to the hard drive controller. If I remember correctly, this test was the most sensitive one when I tested cache timings for my homebrew 1 MB cache module for my Biostar MB-8433UUD-A board. The results were disappointing, because the supposed 10ns SOJ cache chips I got from China turned out to be 15ns chips (according to timings taken with my digital scope).
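The repeated dd | md5sum check can be wrapped in a small script so it loops and compares automatically. A minimal POSIX-shell sketch (the device path, size and pass count are parameters you'd set per machine; on the 486 you'd pass /dev/hda and RAM size + 3 MB as described above):

```shell
#!/bin/sh
# Re-read the same span of a disk (or any file) several times and compare
# checksums; on a stable machine every pass must produce the same sum.
# Arguments: <device-or-file> <megabytes-to-read> [passes, default 10]
check_stability() {
    dev=$1; megs=$2; passes=${3:-10}
    ref=$(dd if="$dev" bs=1024k count="$megs" 2>/dev/null | md5sum)
    i=1
    while [ "$i" -le "$passes" ]; do
        sum=$(dd if="$dev" bs=1024k count="$megs" 2>/dev/null | md5sum)
        if [ "$sum" != "$ref" ]; then
            echo "mismatch on pass $i"
            return 1
        fi
        i=$((i+1))
    done
    echo "all $passes passes identical"
}

# On the 486 with e.g. 16 MB RAM you would run:
# check_stability /dev/hda 19
```

Any mismatch means a bit flipped somewhere between disk, bus, cache and RAM.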

Reply 5 of 38, by jakethompson1

Rank: Oldbie
mkarcher wrote on 2023-10-27, 18:35:

I've had quite a good success rate with booting a Linux live system (it might be something as old as Tom's Root/Boot) from floppy or CD (you can use the "Smart Boot Manager" to boot CDs on systems without CD boot support), and then repeatedly running

And even more so: a kernel compile is a stress test beyond anything this 386/486 hardware might ever have seen... period-correct FAQ here: https://tldp.org/FAQ/sig11/html/index.html
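The idea generalizes to any big repeatable job: loop it until the compiler dies. A hypothetical shell helper for that (the kernel path and make target in the comment are assumptions, not from the post):

```shell
#!/bin/sh
# Run a command over and over until it fails; flaky RAM or cache tends to
# kill large compiles with signal 11 (SIGSEGV), per the sig11 FAQ above.
stress_loop() {
    passes=0
    while "$@" >/dev/null 2>&1; do
        passes=$((passes+1))
    done
    echo "command failed after $passes successful pass(es)"
}

# Period-correct usage would be something like:
# stress_loop make -C /usr/src/linux clean zImage
```

A failure on pass 1 suggests a software problem; random failures after many clean passes point at marginal hardware.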

Reply 6 of 38, by mkarcher

Rank: l33t
jakethompson1 wrote on 2023-10-27, 19:05:

And even more so: a kernel compile is a stress test beyond anything this 386/486 hardware might ever have seen... period-correct FAQ here: https://tldp.org/FAQ/sig11/html/index.html

Yes, indeed. But at that point we are getting severely off-topic: it's then neither DOS nor fast. It is an extremely good stress test, though. A convoluted bug in the K6 was discovered through kernel-compile crashes! See https://web.archive.org/web/20001019084755/ht … ug_faq.html#qa7 for some background information and documentation quotes.

Reply 7 of 38, by DrSwizz

Rank: Newbie
jakethompson1 wrote on 2023-10-27, 19:05:
mkarcher wrote on 2023-10-27, 18:35:

I've had quite a good success rate with booting a Linux live system (it might be something as old as Tom's Root/Boot) from floppy or CD (you can use the "Smart Boot Manager" to boot CDs on systems without CD boot support), and then repeatedly running

And even more so a kernel compile is a stress-test beyond anything this 386/486 hardware might have ever seen... period-correct FAQ here: https://tldp.org/FAQ/sig11/html/index.html

I used looping compiles of the Linux kernel as a stress test for several years, until I heard of Prime95, which was better. And speaking of Prime95: given its name, surely there must be some ancient version of it that runs on Windows 95?

Edit:
Prime95 for Windows 3.1 and later:
https://web.archive.org/web/20000520005516/ht … rg/freesoft.htm
I have not tried these, so I am not sure how useful they are for actual stress testing.

Reply 9 of 38, by CoffeeOne

Rank: Oldbie
GigAHerZ wrote on 2023-10-27, 06:42:

Hi all,

I've been using Phil's benchmark set to test the stability of my DOS machines. But it is a hit-or-miss thing.
The closest thing to a stability test I've seen in that set is DOOM starting to move in the wrong directions, as if the keyboard controls get out of sync with what is happening on the screen. The benchmark still completes, but visually you can clearly see something's not right.
... and even when Doom finishes properly, you sometimes still find the machine to be slightly unstable in later activities.

So I would like to find a tool that could reliably tell me if something's off.
I have a great 486 @ 3x50 MHz and I want to get the tightest settings possible. But at the same time, I want the machine to be rock-solid. Once I set it up, I want to be sure that I don't accidentally stumble on instability later on.

What would be the best software to test stability? Something that stresses the CPU and all levels of cache & memory hard enough to reliably and quickly expose an issue, if one exists.

I personally really like installing Windows 98SE. When you optimise a 486 to the point where Doom and Quake are stable (using very aggressive memory timings or overclocking), you will still fail on the installation of Win98.

Reply 10 of 38, by tauro

Rank: Member
mkarcher wrote on 2023-10-27, 18:35:

I've had quite a good success rate with booting a Linux live system (it might be something as old as Tom's Root/Boot) from floppy or CD (you can use the "Smart Boot Manager" to boot CDs on systems without CD boot support), and then repeatedly running

dd if=/dev/hda bs=1024k count=<count of megs RAM + 3> | md5sum

Pretty neat.
The only downside to your approach is that booting into a live distro on a 486 takes several minutes. What distro(s) do you recommend? DSL?
What about using unixutils' or gnuish's dd and FreeDOS' md5sum on pure DOS (which boots almost instantly)?

Reply 11 of 38, by debs3759

Rank: Oldbie

Just downloaded Tom's Root/Boot; it looks like it might come in handy. A shame that by default it needs a running Linux system in order to be installed to floppy. I'll have to see whether fdformat on a DOS/Windows system creates a suitably formatted floppy, and what DOS or Windows app can write the image to it.

See my graphics card database at www.gpuzoo.com
Constantly being worked on. Feel free to message me with any corrections or details of cards you would like me to research and add.

Reply 12 of 38, by BitWrangler

Rank: l33t++

Firstly, there's no such thing as a fast stability test; everything has to heat-soak up to maximum operating temperature. If you just want something that lies to make you feel good, pasting a line like this

PRINT "Congratulations your system is 110% stable, the stablest stable of all stables, you are also very clever and handsome."

into your favoured compiler (QBasic, Turbo whatever) and compiling it to an executable gives you a program that is 99% as accurate as anything that tells you your system is stable in less than 2 hours.

Secondly, the best stability test for everything from 386 until PII is Doom, period.

It is also an opinion worth consulting for the PII to P4 class, but it is not 100% reliable on faster systems, as they have tweaky areas that Doom's code gets nowhere near. Quake is not a good stability test. I mean, it is if all you want to do is play Quake, but you can get systems set up to play Quake at the highest frame rates that still refuse to boot Windows and crash in many other games. Obviously, if you can't even get the system to run Quake without crashing, it's not stable in the slightest.

So, stability testing with Doom: first, run it as the normal benchmark, though not the Phil method: edge-to-edge screen with HUD bar, high res. If it crashes out before completion: super unstable. If it completes but the scores (thandor.net etc.) are not close to similar systems, say twice as fast, it short-circuited the demo; watch it next time, it probably goes out of sync. Also super unstable. If it completes with a score in the ballpark of what's expected: good, first hurdle cleared, not that unstable. Next, put the demo on loop for 2 hours; this is the heat-soak test. If it doesn't crash or go out of sync in 2 hours, you can now call it fairly stable. This might be good enough for you.

If you still have weird Win 3.11 errors and odd crashes in other games, you might want to leave the demo looping overnight. If it does 24 hours with no sync loss and is running perfectly at the end, you can be fairly sure it's not hardware-related, as far as the subset of hardware Doom uses goes. It could still be a conflict with other cards and features unused by Doom, but your integer unit, cache and lower 8 MB are all behaving. If Windows errors persist at this point, you can probably start to investigate software and other causes, like known bugs.
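The looping itself only takes a trivial batch file. -timedemo demo3 is Doom's built-in benchmark demo; it prints its timing result and exits, so a goto restarts it indefinitely (a sketch; the install path is an assumption):

```
@echo off
rem Heat-soak loop: re-run Doom's timedemo until you break out with Ctrl-C.
cd c:\doom
:loop
doom -timedemo demo3
goto loop
```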

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 13 of 38, by rasz_pl

Rank: l33t

I find Quake to be a good indicator of stability; it heavily taxes the integer/FPU/video subsystems. Doom will not touch the FPU, and it does slow (single-byte) video transfers.

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 14 of 38, by Rav

Rank: Member

These tests are very subjective.

For example, if I set my L2 write to 0WS with an FSB of 40 MHz,
I can run Doom for many hours...
I can play Daggerfall for a whole evening.
On rare occasions it does crash, but...

If I boot Windows 95, then mIRC... wait 5 minutes... then open "My Computer"... it instacrashes, almost every single time I tested.
mIRC and Windows when idle have basically no load...

For a while I actually had my DOS autoexec set the cache write to 0WS while my Windows one used 1WS, because "it only crashes on Windows" 😂

Reply 15 of 38, by BitWrangler

Rank: l33t++

If that has a Phoenix-derived BIOS: there were many OEM systems afflicted with a "My Computer crashes" bug in the BIOS code.

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 16 of 38, by Rav

Rank: Member
BitWrangler wrote on 2023-10-28, 18:21:

If that has a Phoenix-derived BIOS: there were many OEM systems afflicted with a "My Computer crashes" bug in the BIOS code.

Seriously? I did not know about that one.

But it's an Acer with a very limited Acer BIOS.
And it does not crash with L2 write set to 1WS.

Reply 18 of 38, by CoffeeOne

Rank: Oldbie
GigAHerZ wrote on 2023-10-27, 06:42:

Hi all,

I've been using Phil's benchmark set to test the stability of my DOS machines. But it is a hit-or-miss thing.
The closest thing to a stability test I've seen in that set is DOOM starting to move in the wrong directions, as if the keyboard controls get out of sync with what is happening on the screen. The benchmark still completes, but visually you can clearly see something's not right.
... and even when Doom finishes properly, you sometimes still find the machine to be slightly unstable in later activities.

So I would like to find a tool that could reliably tell me if something's off.
I have a great 486 @ 3x50 MHz and I want to get the tightest settings possible. But at the same time, I want the machine to be rock-solid. Once I set it up, I want to be sure that I don't accidentally stumble on instability later on.

What would be the best software to test stability? Something that stresses the CPU and all levels of cache & memory hard enough to reliably and quickly expose an issue, if one exists.

It is slightly off-topic, but I am interested in this 486 150 MHz versus 160 MHz topic.
Some members prefer 150 MHz over 160 MHz because they claim it gives better performance; that is not the case for me (I made a lot of tests on an Asus SV2GX4).
As far as I know, you are one of the "150 MHz guys". For me 160 MHz was always better; I wonder why?

Reply 19 of 38, by Disruptor

Rank: Oldbie

That depends on the RAM and cache timings you can achieve.
If you have a board that can achieve 2-1-1-1 cache timing and good RAM timings at 50 MHz, do it. But first you have to find such a board.
It is more likely you'll find a board that will run those settings at 40 MHz.

But, to be honest, stability checking is not easy, as you may read in this topic.