VOGONS


Cyrix appreciation thread


Reply 60 of 391, by feipoa

Rank: l33t++
DonutKing wrote:

So I guess that's myth busted then :)

You really need an instantaneous discharge to ground from a capacitive source of positive charge (the human body) for this to work out well. How much charge your body has stored, the distance from the CPU to ground, and the materials conducting between CPU and ground all determine the outcome of this experiment. The CPU should be fairly well grounded to receive the maximum amount of current discharged from your body (the capacitor). It is the momentary surge in current which will kill the CPU.
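For a rough sense of scale, the standard human body model used in ESD testing (roughly 100 pF in series with 1.5 kΩ; the 4 kV here is just an assumed carpet-shuffle charge, not a measurement) gives:

$$I_{\text{peak}} = \frac{V}{R} \approx \frac{4\,\text{kV}}{1.5\,\text{k}\Omega} \approx 2.7\,\text{A},\qquad \tau = RC \approx 150\,\text{ns},\qquad E = \tfrac{1}{2}CV^{2} \approx 0.8\,\text{mJ}$$

A couple of amps for ~150 ns into one unlucky pin is exactly that momentary surge.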

Plan your life wisely, you'll be dead before you know it.

Reply 61 of 391, by sliderider

Rank: l33t++
DonutKing wrote:
sliderider wrote:
DonutKing wrote:

OK, I will. I have a CRT here and a shoebox full of common 486 CPUs. I'll leave the CPU there for a day or two and test it again. My money is on a perfectly functional CPU afterwards.

Don't forget to turn the monitor on.

There wouldn't be much point to the exercise without turning it on would there? 😉

Anyway, I used 2 CPUs, just old 486DX-33s. One directly in front of the monitor at its base and one at the rear on top of the casing.
I left them there all weekend and left the monitor running - I played a few games on it and when I went out I left it on a screensaver. So that's over 40 hours exposure to the radiation/magnetic field.

As expected, both CPUs are perfectly functional. They both boot and I can run 3DBench, SpeedSys, and Doom.

I know there is no fancy shielding or anything in this old CRT since I've had it open before to replace the VGA cable. I even pulled out a compass and it goes haywire in these two spots.

So I guess that's myth busted then 😀

Sorry for the derail 😀

And of course none of us has any way of knowing whether you did or you didn't or what the results were if you really did.

Why do you think the government is so afraid of an EMF pulse being used as a weapon? It would disable all electronic devices in the country.

Reply 62 of 391, by feipoa

Rank: l33t++
sliderider wrote:

Why do you think the government is so afraid of an EMF pulse being used as a weapon? It would disable all electronic devices in the country.

Did you recently watch GoldenEye (James Bond)?

Plan your life wisely, you'll be dead before you know it.

Reply 63 of 391, by SquallStrife

Rank: l33t
sliderider wrote:

Why do you think the government is so afraid of an EMF pulse being used as a weapon? It would disable all electronic devices in the country.

Such an EMP would be many orders of magnitude stronger than anything you could achieve in your home. We're talking the kind of levels found in an atomic explosion.

But regardless, even in the bonkers-fantasyland where the EM radiation from a CRT was that strong, why would I put my CPUs near it? Are you just being obnoxious or something?

VogonsDrivers.com | Link | News Thread

Reply 64 of 391, by DonutKing

Rank: Oldbie
sliderider wrote:

And of course none of us has any way of knowing whether you did or you didn't or what the results were if you really did.

[attached photo: UKKqn.jpg]

I'm sure you'll find some way to claim this isn't legitimate, but this was taken two days ago and the CPU wasn't moved since (until I put it in a motherboard for testing just last night).

I certainly did do the experiment and the CPU certainly did work afterwards.

Why do you think the government is so afraid of an EMF pulse being used as a weapon? It would disable all electronic devices in the country.

You mean an EMP, like the kind associated with a nuclear warhead detonation?

Somehow I think a CRT monitor is just a little different to that. We didn't see the Soviets marching through the square with tanks armed with CRT monitors instead of guns and cannon 😀

At this point I think you're just trolling. There's no way a CRT monitor is going to fry your CPU through proximity.

You really need an instantaneous discharge to ground from a capacitive source of positive charge (the human body) for this to work out well. How much charge your body has stored, the distance from the CPU to ground, and the materials conducting between CPU and ground all determine the outcome of this experiment. The CPU should be fairly well grounded to receive the maximum amount of current discharged from your body (the capacitor). It is the momentary surge in current which will kill the CPU.

You are talking about ESD, and yes, I agree that can kill components - but as shown in your post it's pretty hard to instantly kill something with ESD. This is about an electromagnetic field from a CRT monitor somehow frying a component merely from proximity. It's pretty basic electronics that a magnetic field can induce a charge in a coil, but CPUs are not very coil-like at all, so I seriously doubt we'd see this effect - in any case I don't believe it could possibly 'instantly fry' the CPU. At worst you might get some ESD damage, but no different to what you'd normally get working on a PC, which can take years to manifest (and again, I doubt the CPU would build up any charge). Overall the risk to components from a CRT's magnetic field is practically non-existent.

If you are squeamish, don't prod the beach rubble.

Reply 65 of 391, by feipoa

Rank: l33t++

I've had a lot more circuits die from momentary shorts than from ESD, that is for sure, but I have had some sensitive ICs die from ESD. One recently was a mini real-time clock with temperature compensation (DS3231SN#). I moved the system to a dry location during a client-site installation and was a little careless with body grounding. Luckily I had backup components.

Back to the physics experiment, I suppose there could be some marginally closed loops in the processor, say, with some resistance. It would be pretty planar though. Try shorting all the CPU pins together to increase your chances of a conductive loop and for the least resistance. The more loops a path makes (in close proximity), the greater the induced current - and the less resistance, the more current will be induced as well.

I cannot even begin to guess how many micro-amperes it would be though. If I recall correctly, you need a changing magnetic field to induce a current in the loop. It may be that the magnetic field out of the CRT is relatively static? So if you're really determined, try rapidly accelerating and decelerating the CPU in all sorts of directions directly above the CRT (remember to short all the CPU's pins). Perhaps try this for 5 minutes.

If I were to try this, I would probably be a little more systematic: Get some really thin wire, perhaps 30 gauge, and wind it into, say, 15 loops that are less than 2 mm in diameter. Attach the two ends to an AC meter reading microamperes, or a pico-amp meter if you have one. Set the wire loop on top of the CRT and see if the meter shows any AC current. Then accelerate the wire loop in all sorts of directions to see if the meter picks up any current. If you don't get anything in the microamp-to-milliamp range, I'd say it is very unlikely that you're going to fry a CPU by wiggling it atop a CRT.
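As a ballpark for what such a loop might see (all assumed numbers on my part: a sinusoidal stray field of ~10^-4 T at the ~15.7 kHz horizontal scan rate, 15 turns of 2 mm diameter, so A ≈ 3×10^-6 m^2), Faraday's law gives

$$\mathcal{E}_{\max} = N A B_{0}\, 2\pi f \approx 15 \times (3\times10^{-6}\,\text{m}^{2})(10^{-4}\,\text{T})(2\pi\times15{,}700\,\text{Hz}) \approx 0.4\,\text{mV}$$

Sub-millivolt EMF pushed through a microammeter's burden resistance (typically hundreds of ohms or more) works out to well under a microamp.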

This experiment sounds quite exciting to me, but sadly, I tossed out all my CRTs years ago and replaced them with white LCDs. Also, I've traded all my junk CPUs to recyclers in exchange for more rare pieces.

Plan your life wisely, you'll be dead before you know it.

Reply 66 of 391, by feipoa

Rank: l33t++
feipoa wrote:

As noted above, Cyrix did at least list out,
Cyrix MII-400 95/285 3x,
Cyrix MII-400 83/292 3.5x, and
Cyrix MII-400 75/300 4x
in the BIOS Writer's Guide, but I have never seen a MII-400GP with anything less than 95 MHz stamped on it. This, however, may have just been a marketing ploy; a CPU running with a 75 MHz bus is less attractive than one running at nearly 100 MHz (95 MHz, in this case). It is curious how the MII-333 came in 66, 75, and 83 MHz variants though. Either they were targeting the upgrade market whereby the motherboard only went up to 66/75 MHz, or they had some PLL/amplifier frequency issues at higher frequencies. It would have been nice if Cyrix stamped all possible bus combinations on the CPU. That all the MII-366 pieces have a 100 MHz bus stamped on them also points to marketing strategy. I still don't have a clear answer on the MII-400GP's bus weirdness. This will take some extensive testing to determine. If I can find a repeatable means to crash the MII-400 at 3x100 and not at 4x75, the issue may be bus PLL multiplication/amplification related with a dependence on internal frequency and/or core voltage. I know I've said this before, but it would be awfully nice to track down a former Cyrix engineer to get some straight answers.

To add to the speculation, I have seen these date codes,

2.9v, MII-366, v7sn8909az

2.2v, MII-366, vbsdd923ag

2.2v, MII-400, vbsdd932ad
2.2v, MII-400, vbsdd930ad
2.2v, MII-400, vbsdd928ad

2.2v, MII-433, vbsdd923ab, Engineering Sample
2.2v, MII-433, vbsdd940at, Production model

9 = 1999
xy digits following 9 = week of the year.
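For anyone sorting a pile of these, here is a trivial decoder for that rule (a C sketch; it only understands the vbsdd9xy... style codes listed above - codes like v7sn8909az clearly use a different layout and are rejected, and the letter groups are lot markings I won't attempt to decode):

```c
#include <ctype.h>
#include <stdio.h>

/* Toy decoder for the rule above: first digit after the letter
 * prefix is the year (9 = 1999), the next two digits are the week. */
int cyrix_datecode(const char *code, int *year, int *week)
{
    const char *p = code;
    while (*p && !isdigit((unsigned char)*p))
        p++;                                  /* skip letter prefix */
    if (p[0] != '9' || !isdigit((unsigned char)p[1]) ||
        !isdigit((unsigned char)p[2]))
        return -1;                            /* not a 9xy code     */
    *year = 1999;
    *week = (p[1] - '0') * 10 + (p[2] - '0');
    return 0;
}

int main(void)
{
    int y, w;
    if (cyrix_datecode("vbsdd932ad", &y, &w) == 0)
        printf("week %d of %d\n", w, y);      /* week 32 of 1999    */
    return 0;
}
```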

So it seems that when the MII-366 2.2V (250 MHz) model came out, Cyrix was also shooting for the MII-433 (300 MHz), given they have the same week markings. Did it take another 5 weeks to improve things so that the MII-400 could be qualified, and then another 12 weeks to qualify the MII-433? I realise my sample set is small, but that is the only date code I can find for an MII-433 (production model).

By week 40 of 1999, the PIII-600 Katmais were out and the PII-350s were cheap. The AMD K6-III had already been out for 6 months as well. This would have been very difficult to compete against, I think, even at the budget end of things.

Plan your life wisely, you'll be dead before you know it.

Reply 67 of 391, by kool kitty89

Rank: Member
McMick wrote:

Just a comment about FPU and Cyrix / AMD / Intel and Winstone 98: I made the mistake of relying on that benchmark to make my purchasing decision when it came time to replace my P166. I could have gotten a Pentium 233 MMX, but instead I opted for the K6-233, based on reviews that used Winstone/Winbench 98, Tom's Hardware being prominent among them. We ran that demo in the store I worked at all the time, as it had a loop function. The problem was, and I didn't really understand this until later, that AMD's floating point performance suuuuuuuucked compared to the Intel chip. So why did the Ziff Davis benchmarks show the AMD chip as faster than the Intel chip? Because none of the programs used in the Winstone 98 benchmarks used floating point arithmetic. They are all integer-based programs!

Looking at some actual K6 and pentium MMX benchmarks, the FPU performance actually seems comparable, though it would vary by benchmark and application in question. (due to the types of floating point operations used -performance isn't distributed evenly for both FPUs, plus there's the issue of mixed/simultaneous FPU/Integer performance and how well the CPU handles that)

This comparison reflects quite favorably on the K6 FPU:
133 MHz Challenge - 5th/6th gen CPU per clock performance

However, as mentioned in several previous posts, there's more to the picture than raw floating point performance too.
There's the huge issue of overall optimization of an application for a specific CPU architecture, and that's a massive issue for games like Quake (which was hand-tailored to the P5 architecture, not just FPU heavy, but catering to Pentium-specific advantages and avoiding its weak points -while similarly ignoring the trade-offs of 486s, 5x86s, K5s, and 6x86s -let alone 386s).

Quake was the first notable game to do this, but others followed (though it wasn't really routine, and many 1995/96 games still optimized for 486 -or even 386- and thus also tended to favor 5x86/K5/6x86 chips much more -as those ran 486 optimized code much better -again, Tomb Raider and Descent are good counter examples, as would be Wing Commander III and IV -as far as texture mapped polygonal 3D games too)

The bigger issue appeared with API programmed games relying on DirectX, OpenGL, or various early alternative APIs.
And in those cases, performance was largely up to the drivers used and how well those catered to a given CPU. (if only pentium-optimized drivers were available, then you were out of luck, but there certainly would have been the technical possibility for patches/drivers catering to 486/6x86/etc or even using only fixed point libraries -no FPU use at all, heavily catering to the 6x86 and K5's strengths -and to lesser extent, K6, and of course 486 and C6 -and potentially allowing software compatibility with DLC/DRX/SX/386 chips without FPUs at all)
MMX performance could also be a factor for software supporting it. (with either lack of MMX support or lower performing MMX -or different performing, not catering to the same set of trade-offs as Intel's implementation)

And on that note, even if there were patches or alternate drivers available, it would tend to be up to the user to find and install those to improve performance. (the default drivers would almost certainly be Pentium optimized -at least from the late 90s onward, when 486 had fallen out of the mainstream)

As I said in a previous post, I don't have enough personal experience on this issue to say how available such drivers were, or how they performed when they did exist. (it really wasn't a problem for most games and multimedia stuff up through 1996 -Quake being the sole major exception- since games/multimedia software weren't solely optimized for Pentium -and often not optimized at all for Pentium, but more 486 specific)

I was pretty young at the time all of this was going on and I haven't researched further on this specific issue. (at the time my family generally sidestepped it anyway, so my dad's experience doesn't really help -he did a lot of tweaks/patches on a lot of stuff -including stuff like getting beta DVD video drivers to work for our Rage Pro PCI card, but we didn't use the chips/systems that had those major problems with games -we went from 486 to Pentium Classic to K6-2 300 then K6-2 550 and then to various socket 370 and Socket A based systems -and a couple of one-offs like the Pentium Overdrive upgrade for my dad's 486 office PC and some slot 1 stuff from his work -and a Pentium Pro- He usually bought from local wholesalers too back then, so the retail markup was avoided -and there were much better than average deals on Intel parts, especially from overstock, which included the Socket 7 Pentiums and Pentium Overdrive -otherwise he'd almost definitely have ended up with a Cyrix or AMD 5x86 -and he did do something like that with a 486 DLC previously)

You technically don't even need an FPU at all to allow those sorts of games . . . it makes life a bit easier on programmers (and allows for better accuracy in some operations), but fixed point math is a quite viable alternative. (for handling matrix math for 3D vertex calculations, shading, texture mapping, perspective correct rendering, etc -and that's still what's used today on embedded/portable systems without hardware floating point support, as well as almost all 3D PC games up to 1996 and all game consoles prior to the N64)
With a 3D accelerator card, only the 3D vertex math would be handled by the CPU with the rest done by the GPU (until hardware T&L came along -then the CPU didn't even need to do that), on top of running the game engine itself of course. (logic, physics, AI, etc)

The bias on floating point performance of the P5 architecture was a huge catalyst for game programmers to start relying on the FPU for certain operations (since the P5 Pentium's FPU was actually faster at some operations than the ALU -like multiply- and the dual pipelined FPU and superscalar architecture allowed for simultaneous execution of multiple floating point operations as well as floating point and integer operations -the 6x86 only allowed int+int or int+float to execute simultaneously, not float+float).
But FPU usage alone was, again, only part of the overall problem with Pentium optimized code/compilers/libraries/drivers being the real issue. (catering to both integer and floating point operations that worked best for the P5 architecture specifically -so often underutilizing both ALU and FPU of other chips -let alone special features/functionality not present on the pentium)

There's also the issue of motherboard performance, though that's much more of an issue for the Cyrix chips than K6 it seems. (K6 worked better with a wider range of boards)

For AMD chips, the popularity of the massive K6 led to an increase in priority to support architectural-specific optimizations, and especially the introduction of 3DNow! supporting software with the K6-2. (as well as the 100 MHz FSB/L2 cache and improved MMX performance)

Plus, for the Cyrix chips specifically, the PR rating being significantly higher than the clock rate (due to the fast ALU) made it look especially bad compared to the K6 and Pentium while the actual per-clock FPU performance wasn't that much worse. (ie a 3x66/200 MHz -PR 233- 6x86MX should have held up decently well against a Pentium MMX 200 for FPU intensive applications without biased optimization)

feipoa wrote:

I wonder if any of the older (non 100 MHz) 6x86s/MIIs will actually run well at a 100 MHz bus. (and multipliers to match similar or lower clock rates to their rated speed -so avoiding heat/core stability problems)

A great question. I should test for this. Considering that the Cyrix 5x86 seems to run OK at 2x66 MHz implies there is some wiggle room with the buses. The MII-400 might be out of wiggle room though. It is clear to me that they were gunning for a 100 MHz bus. Perhaps the MII-400's are just nice performing MII-366 2.2V pieces and some new growth/yield strategies were implemented for the MII-433? Sometimes little things like humidity and temperature combinations during wafer growth can make or break your yields.

I wonder how many overclockers attempted to just increase the bus speed on Cyrix chips.
Most of the (non 2.2V) 6x86/MII family chips were known to be poor overclockers (already near their limits at rated speeds -and relatively hot running as well), so increasing the core frequency wasn't very practical, but increasing the bus speed could have been more attractive. (at least from the heat generation standpoint -no higher core clock or voltage to increase heat dissipation)

I was actually wondering about that while reading the Red Hill CPU descriptions. They did specifically mention overclocking the K6 300 to 3x100 (and that it worked about 2 out of 3 times), and Cyrix was lagging in getting 100 MHz bus parts out in general (and 83 MHz -and to lesser extent 75 MHz- on many motherboards tended to be finicky, while 100 MHz was not).

A 2x100 MHz MII probably would have fared reasonably well against a K6-2 300 (for integer and I/O performance at least), and probably would have merited the "300" designation much more than the 3x75 and 3.5x66 parts.

I think you would enjoy reading the Ultimate 486 Benchmark Comparison and Cyrix 5x86 Register Enhancements Revealed (links are in my signature); they will answer some of your questions. There are easier-to-read PDF files in those links which make for great bedtime stories. Enabling the register feature FP_FAST on the Cyrix 5x86 boosted FPU performance by an average of 18%, and 39% in some tests. The average FPU performance of a 5x86-133 bests the POD83 by about 10 Pentium ratings (so the difference between a P90 and a P100). The exception seems to be with Quake, whereby a properly configured Cyrix 5x86-133 scored 18.4 FPS and the POD83 scored 20.8 FPS. If the Cyrix 100/120/133 data is linearly extrapolated to 150 MHz, a Cyrix 5x86-150 would marginally beat the POD83 in Quake. When overclocking a Cyrix 5x86, it is important to keep in mind that your mileage will vary; not all Cyrix 5x86 next-generation features overclock well, however my tests have shown that the important ones seem to.

Yes, that is very interesting and informative, and it addresses a lot of what I was wondering about except the issue of the 6x86 vs 5x86 FPU.
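For anyone wondering how features like FP_FAST get flipped on at all: the Cyrix configuration registers live behind the index/data port pair at 22h/23h. A minimal real-mode DOS sketch (Borland-style outportb/inportb; the register index and bit below are placeholders, not the real FP_FAST location - the actual values are in the PDFs mentioned above):

```c
#include <dos.h>   /* Borland/Turbo C: outportb(), inportb(),
                      disable(), enable() */

/* Every access to data port 23h must immediately follow an index
 * write to 22h, or the cycle goes to the chipset instead. */
static unsigned char ccr_read(unsigned char index)
{
    outportb(0x22, index);
    return inportb(0x23);
}

static void ccr_write(unsigned char index, unsigned char value)
{
    outportb(0x22, index);
    outportb(0x23, value);
}

/* PLACEHOLDERS - not the real FP_FAST location; look up the actual
 * register index and bit in the PDFs linked above. */
#define REG_INDEX 0x20
#define REG_BIT   0x10

void enable_feature(void)
{
    unsigned char v;
    disable();                       /* no interrupts mid-sequence */
    v = ccr_read(REG_INDEX);
    ccr_write(REG_INDEX, (unsigned char)(v | REG_BIT));
    enable();
}
```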

This is what the Ultimate 586/686 Benchmark Comparison will tell us. I plan on including the Cyrix 5x86's in this comparison since they are a sorta 486/686 hybrid. I'm pretty sure the 6x86's will win clock-for-clock for ALU performance since they included two integer units, as opposed to one. I also plan on running the 5x86-133 at 2x66 MHz to be a fairer comparison with the 6x86-133 MHz. I'll use the same sticks of RAM, graphics card, etc. to be as consistent as possible.

The 6x86 should smoke the 5x86 for clock-per-clock ALU performance (it's got the dual pipelines and dual integer units for superscalar operation -plus added features not present or buggy/disabled on the production 5x86)
I/O performance should also be better due to the 64-bit bus of the 6x86. (obviously more so for comparing a non-overclocked 33 MHz bus 5x86 vs 66 MHz bus 6x86)

The FPU would definitely be the questionable part though. If the full 6x86 FPU was simple/small enough (in transistor count), they may have implemented it in its entirety in the 5x86 core. If so, that would mean 5x86 and 6x86 parts would have roughly similar per-clock FPU performance (with some variables due to I/O performance) and thus better FPU performance relative to PR ratings of 5x86 parts. (ie an 80 MHz 6x86 vs 120 MHz 5x86)

And this would also make that hypothetical Socket 5/7 5x86 even more interesting. (smaller chip, higher yields, higher clock speeds, and much better matched FPU+ALU performance to the Pentium relative to PR ratings than the 6x86 or K5)

Cyrix was targeting super low power consumption and reduced transistor count with their 5x86 series, which is probably why the die size is so small and the yields so high.

This very much matches the target of the Winchip as well. (though the 5x86 is more advanced than the Winchip -aside from the smaller cache- with generally better per-clock integer and FPU performance)

It definitely would have been interesting to see what Cyrix might have done with a socket 5/7 5x86 derivative to complement the 6x86 in the low end (after they moved away from socket 3 boards).
A small-die, cool-running, high yield part topping out at higher clock speeds than contemporary 6x86 parts (with lower PR ratings), but with potentially stronger FPU performance and better matched FPU/ALU performance to the Pentium. (it would have been really weird if the 5x86 ended up becoming a better option for later FPU-intensive games because of that -which really wasn't the case for the Winchip)

I'd have liked to see Cyrix continue with the 5x86 chips like AMD did. The AMD X5-133 is what kept AMD profitable while developing the K5/6. If Cyrix had the resources to simultaneously develop the 5x86 into 150, 160, and 200 MHz pieces, perhaps their fate would have been different. As the literature mentioned, Cyrix had yield issues with their 6x86's and couldn't keep up the pace. Had there been 160-200 MHz Cyrix 5x86's in Q1/2 of 1996, I don't think many people would have bothered buying Pentiums until the PII's came out.

From a business standpoint, moving on to Socket 5 made a lot of sense (catering to the higher end sector tends to have much higher profit margins -even for the relatively aggressive pricing of Cyrix compared to Intel).

For users, longer production and development of the 5x86 would obviously have been very attractive, but if anything, it probably would have made more business sense to invest in a socket 5 based 5x86 than long-term support for socket 3 models. (even at even lower/more aggressive prices than 6x86 parts, bottom-end socket 5 parts could have had considerably higher margins than comparable socket 3 parts -plus wider and officially supported bus speeds to work with, faster chipsets, faster RAM, etc)
Or, if they did support socket 3 a bit longer, it still could have made sense to move a 5x86 derivative into socket 5/7 as well.

Albeit, had they actually done that, it raises the question of what would have happened when the MII/MX was replacing the 6x86. (would there have been a 5x86 MX follow-on directly based on the 5x86 core but with MMX added -and maybe a larger cache- or would it make more sense to just use a small cache version of the MII instead -or just push the die-shrunk vanilla 6x86 into that role, though MMX support was significant and a 5x86+MMX part might have been more efficient)

Indeed, it would have been more ideal for Cyrix if game makers targeted the MII's strengths, but if I was a game maker, I'd still probably want to appease the big guy, Intel.

While 486s were still significant (or 386s still relevant for that matter), the issue was more open-ended though, which is why the majority of games even up through '96 were 486 optimized or at least catered to 486 as well as Pentium.
And many offered more flexible detail settings than Quake as well. (again, you can't even disable perspective correction in Quake -which is a substantial chunk of floating point overhead- whereas Tomb Raider does allow that and can also run with no FPU present at all)

Again, I wasn't suggesting game developers specifically cater to the 6x86's design quirks specifically (like Quake does for the Pentium), but just a generalized support for non-pentium parts (486/5x86/6x86/K5).
And the same goes for drivers for graphics APIs too. (though, again, I'm not sure such support wasn't actually available in many cases -I'm not sure one way or another -and the main period when this would have been important would be prior to 3DNow! and SSE becoming popular -as floating point performance became incredibly attractive with both of those . . . albeit hardware T&L became common just a couple years after that, rendering that less important for graphics and more important for multimedia sound/video acceleration and physics computations -though much of that has since been offloaded to GPUs as well)

Last edited by kool kitty89 on 2012-02-20, 20:50. Edited 1 time in total.

Reply 68 of 391, by feipoa

Rank: l33t++

There's the huge issue of overall optimization of an application for a specific CPU architecture, and that's a massive issue for games like Quake.

I wonder how game programmers actually did this. Wouldn't the CPU-specific optimisations be done in assembly language, whereas the game programmers would write the code in some other high-level language (e.g. C++)? When writing in high-level languages you are at the mercy of your compiler to Pentium-optimise, so it would almost seem like the programmer would need to be aware of 1) specific CPU architecture; 2) compiler routines and optimisations for that architecture; 3) what sequence/structure of high-level code is optimised for the specific CPU architecture given compiler-specific routines.

In high-level languages, consecutive if/then/switch statements, for example, can usually be executed much more quickly if the CPU uses branch prediction. The trade-off is that some infrequent code needs to stall the pipeline. The other obvious code which translates directly from the high-level language to the CPU is heavy use of decimal numbers and decimal math (FPU), that is, as opposed to using integer division with modulus (ALU) to work out a single decimal point (or more) via the remainder.
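As a trivial illustration of that last point (my own toy example, not from any particular game): keeping values scaled by 100 lets the ALU's divide and modulus recover the "decimal" digits with no FPU involvement:

```c
#include <stdio.h>

/* Integer-only "decimal" arithmetic: values carry an implied x100
 * scale, so division recovers the whole part and modulus the
 * fractional part - the FPU is never touched. */
int main(void)
{
    long cents = 12345;            /* represents 123.45 (x100)     */
    cents += 199;                  /* add 1.99: plain integer add  */
    printf("%ld.%02ld\n", cents / 100, cents % 100);   /* 125.44   */
    return 0;
}
```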

The 6x86 should smoke the 5x86 for clock per clock ALU performance

Overall, the 5x86-133 with features enabled and running at 2x66 MHz equated to a Pentium 100. So if the Cyrix 6x86-100 MHz (120 PR) chip is anything like a Pentium 100, there is probably a 33% leap (frequency-wise) with the 6x86. Aside from the differences already mentioned earlier (parallel integer pipelines and simultaneous execution of int/int or int/fpu), the 6x86 also has a 256-byte instruction line buffer, whereas the 5x86 has a 48-byte buffer. The 6x86 also has a 128-entry cache for the translation lookaside buffer, whereas the 5x86 is only 32-entry.

Some benchmarks will certainly translate the specs into a more meaningful comparison.

Given the relatively poor FPU performance of the MII/MX, I suspect there may not be much improvement in FPU performance of the 6x86 with the exception of simultaneous FPU/ALU calculations. Perhaps a 5% per-clock boost?

Plan your life wisely, you'll be dead before you know it.

Reply 69 of 391, by kool kitty89

Rank: Member
feipoa wrote:

There's the huge issue of overall optimization of an application for a specific CPU architecture, and that's a massive issue for games like Quake.

I wonder how game programmers actually did this. Wouldn't the CPU-specific optimisations be done in assembly language, whereas the game programmers would write the code in some other high-level language (e.g. C++)? When writing in high-level languages you are at the mercy of your compiler to Pentium-optimise, so it would almost seem like the programmer would need to be aware of 1) specific CPU architecture; 2) compiler routines and optimisations for that architecture; 3) what sequence/structure of high-level code is optimised for the specific CPU architecture given compiler-specific routines.

To some extent, yes, which is why you'd want to select an appropriate compiler (or write a custom compiler -which several developers did; some even made proprietary APIs for their games in the early 3D accelerator era -like Argonaut's BRender).

However, even if blindly using a compiler, you can still do a lot of manual optimization in C/C++ by using specific operations (like integer rather than floating point) as well as creating a game with variable detail settings to cater to a wider range of systems. (both in overall performance and architecture-specific performance)

And, while most PC games programmers had stopped coding games entirely in assembly by the mid 90s, it was still much more reasonable to expect some hand-tuned assembly language tweaks after the code was compiled.

But again, then there's the separate issue of games relying on standard/semi-standard APIs (for both software and accelerated rendering) which were at the mercy of available drivers for those APIs. (be it glide, openGL, directX, or a few less common ones like S3D) That's aside from the few cases of custom APIs (like Argonaut did), where the developers had direct control over the API design and driver development.

On the plus side though, those standard APIs would have had the potential to generally simplify support for various CPUs as well (since you'd only need a global driver set rather than programming specific to each game), but that would still require actual support provided for such drivers or patches to similar effect. (and, again, I'm not sure if anything like that was available or not -but given the relatively high numbers of late gen 486/5x86 users combined with AMD/Cyrix/IDT Socket 5/7 CPUs with heavy integer-bias, it would have made plenty of sense to offer that sort of support -even if the Pentium was the de facto mainstream option)

In high-level languages, consecutive if/then/switch statements, for example, can usually be executed much more quickly if the CPU uses branch prediction.

The trade-off is that some infrequent code needs to stall the pipeline.

This issue should still favor the Pentium class Cyrix and AMD CPUs, but older parts would be more problematic. (ie 386/486 optimized compilers would handle that differently)

The other obvious code which translates directly from the high-level language to the CPU is heavy use of decimal numbers and decimal math (FPU), that is, as opposed to using integer division with modulus (ALU) to work out a single decimal point (or more) via the remainder.

A more convenient approach is to use fixed point decimal notation rather than plain integers (computationally the same, but easier for the programmer to work with), but C and C++ don't natively support fixed point decimal notation for integers as such, so programmers would have to work around that. (still not that big of a problem though)

The actual technical advantage (CPU-specific performance quirks aside) of hardware floating point math is better accuracy (fewer rounding/truncation errors) than with fixed point or integer math. (but for the needs of most games, this still isn't a significant problem)
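For the curious, this is roughly what that workaround looks like - a toy 16.16 fixed-point sketch of my own, not from any particular engine (real mid-90s code did the re-shifting in assembly; a 64-bit intermediate is used here for clarity):

```c
#include <stdio.h>

/* Minimal 16.16 fixed point: integers carry an implied binary
 * point, so the ALU does all the arithmetic. Multiply and divide
 * need a shift afterwards to re-align the binary point. */
typedef long fixed;                       /* 16 integer.16 fraction */

#define INT_TO_FIX(i)  ((fixed)(i) << 16)
#define FIX_TO_INT(f)  ((int)((f) >> 16))

fixed fix_mul(fixed a, fixed b)
{
    return (fixed)(((long long)a * b) >> 16);  /* realign the point */
}

fixed fix_div(fixed a, fixed b)
{
    return (fixed)((((long long)a) << 16) / b);
}

int main(void)
{
    fixed x = 0x18000;                    /* 1.5   */
    fixed y = 0x24000;                    /* 2.25  */
    fixed z = fix_mul(x, y);              /* 3.375 */
    printf("%d.%04ld\n", FIX_TO_INT(z),
           ((z & 0xFFFFL) * 10000L) >> 16);        /* prints 3.3750 */
    return 0;
}
```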

Overall, the 5x86-133 with features enabled and running at 2x66 MHz equated to a Pentium 100. So if the Cyrix 6x86-100 MHz (120 PR) chip is anything like a Pentium 100, there is probably a 33% leap (frequency-wise) with the 6x86. Aside from the differences already mentioned earlier (parallel integer pipelines and simultaneous execution of int/int or int/fpu), the 6x86 also has a 256-byte instruction line buffer, whereas the 5x86 has a 48-byte buffer. The 6x86 also has a 128-entry cache for the translation lookaside buffer, whereas the 5x86 is only 32-entry.

The 100 MHz 6x86 is supposed to be slightly faster than a Pentium 120 in integer performance (I/O performance would probably be weaker due to the 50 MHz bus), and the 133 MHz 6x86 was rated to be slightly faster than a Pentium 166 in ALU performance (and, of course, ran on a 66 MHz bus)

So that would seem more like 1.6-1.7 times the per-clock integer performance of the 5x86. (albeit the 64-bit bus and larger L2 caches -with pipeline burst support- probably impacted that too)

This would be the question: could the 5x86 core (with full functionality enabled and implemented in socket 5/7) have actually performed similarly to the full 6x86 core around the same time? (ie could yields have been high enough to allow ~60% faster clock speeds than the 6x86 offered at the same time historically -while also offering lower production costs due to the smaller die)
And, of course (in hindsight) the higher clock speeds and proportionally higher FPU performance would also be quite significant.

This would be a pretty big assumption though, since even with the smaller/cooler die used (and relatively higher yields), it wouldn't automatically mean clock speeds of the 5x86 core could scale up fast enough to match what the 6x86 managed. (at the time of the 6x86's initial release, the 5x86 certainly had considerably higher clock speeds, but it's possible that the 5x86 might have hit a wall -on the then-current process at least- while the 6x86 still had proportionally more room to scale up to better performance at lower clock speeds)

If it was mainly the relatively large/hot core of the 6x86 that limited its clock speed increases, that would definitely be a major advantage for the smaller/cooler 5x86 (on similar process tech). And given the longer (7 stage) pipeline of the Cyrix chips, clock speeds technically should have scaled up better than the P5 (5 stage) parts, though other design aspects also could have limited that aside from the heat problems. (given the much lower power dissipation of most contemporary P5 parts vs 6x86, it's a reasonable guess that thermal dissipation was the primary limiting factor on the Cyrix parts -which would mean the 5x86 could have been very promising for attaining higher clock speeds)

It would definitely be interesting to gain some insight from the Cyrix engineers (as you already mentioned). I'd imagine they must have considered the trade-offs/advantages of using the smaller 5x86 core over the full 6x86, so there's probably a good reason that they didn't. (or there were other reasons that made more sense at the time, but may not have panned out as well in hindsight)

Given the relatively poor FPU performance of the MII/MX, I suspect there may not be much improvement in FPU performance of the 6x86 with the exception of simultaneous FPU/ALU calculations. Perhaps a 5% per-clock boost?

5% is also what the review on Realworld Tech mentioned for raw FPU performance of the MII vs MI parts (while ALU performance was more of a 15% improvement for initial MIIs -this apparently increased a bit further later on, particularly around the time of the 366, which came after Realworld Tech ceased updating unfortunately).

Reply 70 of 391, by feipoa

Rank: l33t++

I think you should source some Cyrix 5x86s/6x86s, some motherboards, and start playing around. Nothing like actual lab work to solidify this peculiar hobby.

An IBM 5x86C-100GF just came in today. It has a date code of week 38, 1996, which is the latest I've seen for an IBM/Cyrix 5x86.

Plan your life wisely, you'll be dead before you know it.

Reply 71 of 391, by kool kitty89

Rank: Member
feipoa wrote:

I think you should source some Cyrix 5x86s/6x86s, some motherboards, and start playing around. Nothing like actual lab work to solidify this peculiar hobby.

An IBM 5x86C-100GF just came in today. It has a date code of week 38, 1996, which is the latest I've seen for an IBM/Cyrix 5x86.

I may do this at some point, but probably not super soon. It'd probably be a fun project to work on, but I've definitely got a lot of stuff in line ahead of it (aside from college and real-life responsibilities at home, there are several other hobby projects ahead of that 😉).

I'll keep an eye out for good deals though, finding stuff on the cheap is always more fun. (and between the scrap gold sale lots -thankfully mostly broken/damaged parts at least- and the overpriced BIN collector's sales listings, there's not as many good options on ebay as there used to be, but I'll keep an eye out -oddly enough, in terms of Cyrix stuff, the cheapest/most common working parts up on ebay recently seem to be the 2.9 V 250/100 MHz PR366 -I didn't check at WeirdStuff; I wasn't really looking for CPUs when I went there recently, though I'll be sure to check next time)

Anyway, as it is right now, I do have a working Baby AT tower with an FIC 503A (currently a K6-2/550 iirc -need to check again), so as long as that holds up, I should have a pretty good performing SS7 board. (albeit it's the 512k cache version, so some benchmarks might fare worse than the 1MB version).

I think we got rid of all our pre-socket 7 boards years ago, but there might still be some in storage too. (don't see any of the socket 3 or older CPUs around either -aside from a 286-12 and 286-20, but just a couple socket 4 Pentium 66s, socket 5 100, 133, and 166, a 180 MHz PPro, a K6-2 300, the 550 in the working box, some slot 1 pIIs and IIIs, and a smattering of socket 370 and socket A parts -I know we used to have several 486s, a Pentium Overdrive, and my dad had a 386 and DLC before that, but I'm not sure if we kept any of those -I'm thinking most of the stuff we didn't keep either got fried or got traded/sold to friends or local dealers)

If I wanted to do any testing of socket 3 stuff, I'd obviously need to hunt some of that down (boards and CPUs).

Oh, and by the way, that friend of mine I went to WeirdStuff with is a member on here too (Apolloboy), and (again) he probably has a somewhat better idea of what they'd had recently. (though I'm not sure what stuff he was paying more attention to -after he got his socket 7 Pentium 133 DOS/Win3.x gaming rig together, he's mainly been keeping an eye out for obscure sound cards -especially an LAPC-I and some video cards -including Voodoo 1s or 2s; oh, and he picked up an Amiga 500 there a couple months back too 😉)

And, on a more nostalgia-oriented Cyrix appreciation topic, it turns out that my family never had a 6x86 system (I mentioned part of that in a previous post). I'd thought we'd had a 6x86 somewhere in the mid/late 90s, but after asking my dad about it, it turns out his home office/workstation PC and our shared family PCs never used 6x86s and we were using Socket 5/7 Pentiums for much of that time (bought through wholesale) and then K6-2s prior to several socket 370 and socket A machines. (and socket 3 prior to that with -probably- a DX4100 in the family PC and a Pentium Overdrive in his PC at the last point prior to upgrading)
The only Cyrix chips he remembers using was the old 486DLC prior to getting a socket 3 board (and before we got a shared family PC -so it would have been in my dad's work/office/game machine). I don't think I ever actually used that machine, though. (so I may not have ever actually used a Cyrix based PC . . . yet 😉)

Also, on the issue of the FIC VA-503+'s reliability problems: any idea on what specifically tends to go wrong with it?
Looking at old reviews, it tends to get generally high performance and reliability ratings (though consistent criticism of the confusing jumper set-up and errors in the early manuals). The early revisions (pre 1.2) also lacked the higher bus speed modes (beyond 100 MHz) it seems. (the 503A seems to have improved that and also switched largely to DIP switches rather than jumpers)

Looking at forums/message board user comments on problems though, it seems that the BIOS dying tended to be a major issue, and even worse since some models had the BIOS soldered in rather than socketed, so replacement was a pain.

Reply 72 of 391, by feipoa

Rank: l33t++
feipoa wrote:

I've had a lot more circuits die from momentary shorts than from ESD, that is for sure, but I have had some sensitive ICs die from ESD. One recently was a mini real-time clock with temperature compensation (DS3231SN#). I moved the system to a dry location during a client-site installation and was a little careless with body grounding. Luckily I had backup components.

Back to the physics experiment, I suppose there could be some marginally closed loops in the processor, say, with some resistance. It would be pretty planar though. Try shorting all the CPU pins together to increase your chances of a conductive loop and for the least resistance. The more loops a path makes (in close proximity), the greater the induced current - and the less resistance, the more current will be induced as well.

I cannot even begin to guess how many micro-amperes it would be though. If I recall correctly, you need a changing magnetic field to induce a current in the loop. It may be that the magnetic field out of the CRT is relatively static? So if you're really determined, try rapidly accelerating and decelerating the CPU in all sorts of directions directly above the CRT (remember to short all the CPU's pins). Perhaps try this for 5 minutes.

If I were to try this, I would probably be a little more systematic: Get some really thin wire, perhaps 30 gauge, and wind it into, say, 15 loops that are less than 2 mm in diameter. Attach the two ends to an AC meter reading microamperes, or a pico-amp meter if you have one. Set the wire loop on top of the CRT and see if the meter shows any AC current. Then accelerate the wire loop in all sorts of directions to see if the meter picks up any current. If you don't get anything in the microamp-to-milliamp range, I'd say it is very unlikely that you're going to fry a CPU by wiggling it atop a CRT.

This experiment sounds quite exciting to me, but sadly, I tossed out all my CRTs years ago and replaced them with white LCDs. Also, I've traded all my junk CPUs to recyclers in exchange for more rare pieces.

Since nobody took me up on this little experiment, I tried it myself. I created a little inductor using the smallest insulated wire I had, which was 26 gauge. I wrapped it 10 times around a paper clip, then removed the paper clip so that air serves as the core. I connected the inductor ends to an AC micro-amp multimeter. The off state seems to read 0.1 uA (photo). When I brought the sensor into contact with the LCD (all positions), the meter did not deflect. It did not deflect near unshielded 10 A AC wires either, nor around a running microwave. This would make sense if these devices do not emit a significant magnetic field. A fan, with a rotating armature, would certainly emit a magnetic field. I brought the sensor up close to a Honeywell fan set on high and I read up to 2.0 uA on the multimeter. Keep in mind that this is under short-circuit conditions! The current would certainly drop with any resistive load.

2 micro-amperes will certainly not short-out a CPU, and that is with 10 turns at about 1 - 1.5 mm spacing. I have no idea how many low resistance loops there might be in a CPU and what the spacing would be. Now, if someone who owns a CRT monitor can repeat this test with a similar inductor and using a CRT, we could see if the CRT generates a significantly larger magnetic field than a common house fan.

Taking a Cyrix 5x86 as an example, it can safely handle over 4 watts, or 1.2 Amps at 3.6 Volts, but that is for the entire heat generation averaged across all running wires. I am not sure what the current rating of an individual trace would be, but even at 2 uA and, say, 1 ohm of resistance, any CPU should be able to dissipate such little heat (pico-watts) indefinitely, and without a heatsink. Until someone has CRT results, I am very doubtful that a CRT would kill a CPU.
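Putting a number on that worst case:

$$P = I^{2}R = (2\times10^{-6}\,\text{A})^{2} \times 1\,\Omega = 4\times10^{-12}\,\text{W} = 4\,\text{pW}$$

That is roughly twelve orders of magnitude below the ~4 W the package is rated to dissipate.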


Plan your life wisely, you'll be dead before you know it.

Reply 73 of 391, by kool kitty89

Rank: Member

On the issue of high voltage ESD damage on chips: it's most definitely a real problem, but it's not a super common occurrence, and ESD alone won't always kill an IC either. (it depends more on which leads/traces receive the shock)
In all his years of working with PCs, I think my dad has only killed one CPU by static discharge, and that was quite recently actually (killed a Socket A Sempron 3200+ while working on the board -it was installed too, he wasn't handling it directly -may have fried the board too, we still need to reflash the BIOS to check)

That said, I really don't see how the proximity to a strong magnetic field would damage an IC (as long as it's not something like FeRAM -which won't get ruined either, but will get the data scrambled).
We're not talking high voltage electrostatic discharge here, just an EMF field. (and note, on the issue of EMP bursts, it IS high voltage static discharge that kills electronics -the intense electromagnetic field generated in the pulse causes arcing discharges within circuits, like putting a chip in a microwave oven)

Well . . . OK, if said "strong magnetic field" is generated in the microwave region of the EM spectrum, then yes, it could kill ICs, but it would also be quite dangerous for humans (or almost any other living things) to be around either. (definitely NOT the sort of thing leaking from a CRT) 😉

Reply 74 of 391, by feipoa

Rank: l33t++

GA5-AA just sold for $33 + $26 shipping (=$59)
http://www.ebay.ca/itm/220955094298

I was watching this item, but there's no way I was going to pay $60 for a high risk item, maybe $30 shipped. It looks like all this chatting in forums is raising the price of this stuff.

For anyone who bought this board, or for anyone else who has an ALi chipset motherboard, can you test your Cyrix MII in it using Landmark's speed test and PCPBench? Much appreciated.

Plan your life wisely, you'll be dead before you know it.

Reply 75 of 391, by kool kitty89

Rank: Member
feipoa wrote:

GA5-AA just sold for $33 + $26 shipping (=$59)
http://www.ebay.ca/itm/220955094298

I was watching this item, but there's no way I was going to pay $60 for a high risk item, maybe $30 shipped. It looks like all this chatting in forums is raising the price of this stuff.

Heh, it's probably more likely to do with the fact that it's in mint new in box condition. 😉

Reply 76 of 391, by sliderider

Rank: l33t++
kool kitty89 wrote:
feipoa wrote:

GA5-AA just sold for $33 + $26 shipping (=$59)
http://www.ebay.ca/itm/220955094298

I was watching this item, but there's no way I was going to pay $60 for a high risk item, maybe $30 shipped. It looks like all this chatting in forums is raising the price of this stuff.

Heh, it's probably more likely to do with the fact that it's in mint new in box condition. 😉

Yeah, having all the cables and connectors and the manual (that's very important when dealing with old boards) adds a lot to the value. It adds even more when there are software CDs that originally came with the board.

Reply 77 of 391, by kool kitty89

Rank: Member

I got a 2.9 V Cyrix MII 366 (250/100 MHz) in the mail today. It was a bit of a mess with old thermal paste, but cleaned up nicely. I probably won't be able to test it for a while, but it's neat to have around in the meantime.

Yes, I know it's one of the common models (albeit the fastest of those), but as I said before I'm doing this on the cheap for now, and that model has been going pretty cheaply on ebay recently and it should be fun to test out in my VA503A if nothing else. (it was also incorrectly listed as an MII 386, though there's another 2.9V 366 on ebay for around the same price right now -unless there's a bidding war in the next day before it closes 😉)

It also came with a heat sink, but no fan mounting (it's a rather large heat sink as far as socket 7 fare goes, so maybe it relied on a case fan for cooling).

One odd thing I found while cleaning the top of the chip is some sort of fabric patch material embedded in the thermal paste. (I've never heard of that being used before)

And regardless of collecting stuff on the cheap for now, I'll certainly keep an eye out for any of the rarities (especially at bargain prices 😉), including 2.2V MIIs and late model 5x86s. (if not for myself, I'm sure I could find someone else interested if I find a good deal on something locally)

Also, looking around a bit more, it seems like the 250 nm Cyrix-NatSemi MIIs were 2.2V too, so the 2.9 V chips should be 350 nm parts. (unless there's some transitional period where 250 nm parts were rated for 2.9V too -especially if yields were low initially -and I'm definitely seeing several references to 250 nm 300 and 333 parts, even though all of those seem to be 2.9V)

The MII 366 seems particularly tricky as it seems to have been built in 350, 250, and 180 nm at different points. (any 350 nm ones would obviously be 2.9V though)

Reply 78 of 391, by jaqie

Rank: Member

I'm dead-tired so this post will be short and very oversimplified. CRTs have a degaussing coil in them and many auto-degauss on power-on; this will produce a far stronger EM field than anything made while the monitor is running normally. If you run a whole system on top of a monitor and repeatedly hit degauss (on one with manual degauss), it would wear on the CRT some but would show the system to still be running just fine. You could do this with many different mobos even, and under load as well (a timedemo loop in a game would be good, or maybe Prime95 or FurMark). Anything large enough to cause damage over time would definitely also cause errors in a running and loaded-down system.

Reply 79 of 391, by kool kitty89

Rank: Member
jaqie wrote:

I'm dead-tired so this post will be short and very oversimplified. CRTs have a degaussing coil in them and many auto-degauss on power-on; this will produce a far stronger EM field than anything made while the monitor is running normally. If you run a whole system on top of a monitor and repeatedly hit degauss (on one with manual degauss), it would wear on the CRT some but would show the system to still be running just fine. You could do this with many different mobos even, and under load as well (a timedemo loop in a game would be good, or maybe Prime95 or FurMark). Anything large enough to cause damage over time would definitely also cause errors in a running and loaded-down system.

In that case, for chronic damage, I'd expect magnetic disks and drive heads (HDD and FDD) to be the parts generally affected.
Other components (namely ICs) would tend to be damaged only by acute electric discharge rather than chronic/intermittent exposure to high-strength EM fields. (or if FeRAM is used instead of EEPROM/Flash for the BIOS, then chronic exposure could certainly be a problem)

On that note though, the intense discharge during degaussing could still be significant for damaging ICs as it could induce static build-up and high voltage discharge under the right conditions.
Albeit, any such hardware exposed to that would need to be really close and have a suitable grounding source of opposite polarity. (ie if you had a CPU sitting on a monitor when it powered on, and then went to pick it up, you could provide a grounded/opposite polarity source that would result in a discharge that could damage the chip -ever touched the screen -or case- of a CRT just after being turned on, and gotten a shock or felt the static build-up?)