I think a comparison focused on maxing out a given motherboard socket/bus architecture (or even chipset range/generation) would be more the sort of topic that satisfies some of the criteria listed in zapbuzz's post.
As it is, it's often the bus design and/or chipset limitations that skew some of the performance comparisons here, and some of those are compromises, too (you might be limited to choosing between faster DRAM performance vs. board-level cache speed vs. cache size vs. PCI and other I/O bus performance, with tolerance to overclocking on top of that). Some of those are motherboard-model-specific limitations, some are actual chipset ones, and some are just luck of the draw with the individual board/chipset used (especially as far as running things out of spec goes).
The Cyrix MediaGX gets a particularly raw deal here, given its limited range of motherboard chipsets and lack of board-level cache support.
To be fair there, you'd probably need to run the comparison with all external cache disabled, in which case you'd also tend to get more emphasis on CPU and chipset DRAM access/throughput performance, as well as on the size and performance of on-chip cache, FSB overclockability, and/or tight DRAM timing tolerances. (You might also get some weird cases where otherwise slow chipsets show relatively good performance with the board-level cache disabled and DRAM timings tweaked: this seems to be the case with the OPTi 495SX 386/486 chipset, though it doesn't show much when 1x-multiplier 486 or 386/486DLC CPUs are installed, and I have no DRx2 CPUs to compare.)
I think the point of this thread's benchmarking efforts was mostly for comparison purposes between mid/late 90s era x86 CPUs (ie from late gen 486 socket CPUs to early gen Coppermine and Athlon CPUs).
One specific part of the U686BC here was comparative clock-for-clock performance.
As far as pushing a given socket/FSB architecture as far as it can go (or perhaps an x86/IA-32/extended instruction set spec range), that's usually going to come down to overclocking parts of the chipset along with the cache and/or DRAM as well as the CPU. And in that case you could end up favoring a chipset and/or CPU with somewhat lower per-clock performance in favor of ones that tolerate overclocking the most.
To that end, the issues of chip binning and market segmentation also come into play. In addition to what was already said above, some manufacturers tend to be more conservative with their ratings than others and thus end up with parts that more often run fine when pushed further out of their specified tolerances than typical. (Even with average or less conservative ratings, you'll usually still have some extra room to push things if you keep temps below the typical 70C spec, among other things.)
On top of that, official speed and voltage ratings were sometimes limited by the practical power consumption and cooling limitations (or expectations) of the day. On the power end, that limit could come from the power supply or from board-level voltage regulators (though old, strictly 5V boards don't normally have the latter ... and then you have some Socket 4 Pentium boards with voltage regulators boosting things to 5.25 or 5.27 volts).
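As a rough illustration of why those ratings track power and cooling limits: dynamic power scales roughly with frequency times voltage squared, so even modest overvolting plus overclocking adds up quickly. A minimal sketch (the operating points below are hypothetical, just to show the scaling):

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f, where the capacitance
# term cancels when comparing the same die at two operating points.
def relative_power(v_new, f_new, v_ref, f_ref):
    """Estimated dynamic power at (v_new, f_new) relative to (v_ref, f_ref)."""
    return (v_new / v_ref) ** 2 * (f_new / f_ref)

# Hypothetical example: a 3.3V/150 MHz part pushed to 3.5V/200 MHz
# draws roughly 1.5x its stock dynamic power.
print(round(relative_power(3.5, 200, 3.3, 150), 2))  # -> 1.5
```

This is leakage-free back-of-envelope stuff, but it shows why a voltage bump that seems small on paper can push a board's regulators or a period PSU past what they were specified for.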
Overvolting starts to appear in the 486 era (though I suppose there may have been some 5.5-6V PSU/board mods for older 5V systems, too), and then more of it arrives with Socket 7, or at least the ability to tweak voltages on many boards: sometimes just across officially used settings, sometimes specifically for tweaking.
With earlier Socket 5 (and some early Socket 7) boards, you didn't have much flexibility there, with just the 3.3, 3.4, or 3.52V core and I/O specs. There may be some boards that had 3.6-4.0V settings, but I haven't seen any of those myself. OTOH, voltage regulator mods often aren't that complicated and were probably done for overclocking purposes back then. (And I think most of the chip processes down to 350 nm were fairly 4V tolerant with sufficient cooling, but that could vary by manufacturer and might also be limited by external surface-mounted components like resistors on the CPU package.)
I believe Cyrix and NexGen both used similar IBM manufacturing processes at one point, though I think Cyrix got in earlier with the .65 micron process and NexGen with .5 micron (but maybe they also worked with .65 micron parts). Cyrix officially rated parts up to 3.7V (with some 5x86 models), and NexGen consistently used 4.0V ratings on their Nx586, including the later .35 micron parts, from what I've read.
However, Cyrix never went beyond the 3.52V rating for the 6x86, and IBM tended towards a more conservative 3.3V with the 5x86 and 6x86 (though I think I've seen some 3.4 or 3.5V IBM 6x86 parts). Cyrix and IBM also used the same .35 micron process for both the late-production single-rail 3.3-3.52V rated parts and the 2.8V 6x86L (and some 2.5V mobile versions), and while it's possible the SMDs on some 6x86Ls are out of spec at 3.5V, they appear to tolerate 3.5V for extended periods in my experience. (Similar to overvolting Intel P55C parts, except some late-production .28 micron Intel P55Cs might be more vulnerable.)
I'm pretty sure many of Cyrix's ratings were limited by power consumption and heat dissipation issues, as well as the willingness of motherboard manufacturers to officially support higher power dissipation at the time, but the few Socket 7 and Super 7 boards I've tried tend to overclock 6x86Ls quite well, and all my PR-200s will run Windows 98 at 3x66 or 3x68 MHz at 3.3 to 3.5V (with late SS7 or Socket 370/462 era heatsinks+fans).
That might also be a market segmentation thing, since the M2 core (6x86MX) may have been on the market by the time the 6x86L hit reasonable yields at boosted voltages, but then they also stuck with 2.7 to 2.9V for the 6x86MX while AMD resorted to the 3.2V K6-233. As far as I can tell, the early 6x86MX parts used the same (or a similar) .35 micron process as the 6x86L, though the MX was also a larger chip than the 6x86L or the .35 micron K6. Still, I'd think a 3.2 or 3.3V M2 at 3x66 would've been in the ballpark of what AMD was doing, and it would've avoided needing boards (and RAM) with 75 or 83 MHz support, at the expense of needing sufficient power consumption/supply tolerances.
I actually haven't tried overvolting and overclocking early Cyrix/IBM M2s, so maybe they don't even do as well as the M1 there due to the design changes/tweaks or maybe just the cache sort of like AMD's troubles with the K6-III.
In that case, though, they missed an opportunity to sell factory-overclocked M1s to the 1996/1997 gaming market, for Tomb Raider or Quake-like FPU-dependent games and for really demanding ALU-only games, especially in SVGA-resolution modes. That, and maybe modifying the M2 core to support decoupled CPU and FPU clock multipliers. (The FPU supposedly didn't change at all and certainly doesn't seem to have trouble at 200+ MHz in the M1, plus it made up an increasingly smaller portion of the CPU die area and power/heat dissipation going from 486/5x86 to M1 to M2.)
What's weirder is that Cyrix and IBM subsequently bumped the 2.7 or 2.8V rating of early 6x86MX parts to a consistent 2.9V for the Cyrix MII and later-gen IBM 6x86MX parts, and did so both for late-production .35 micron and all .25 micron parts (except low-voltage mobile and embedded rated ones), in stark contrast to AMD's 2.2 to 2.4V rated .25 micron K6/K6-2/K6-III and Intel's 2.0V rated .25 micron Pentium II, Celeron, and PIII. But again, that doesn't automatically mean AMD and Intel parts of that sort can run safely at 2.9V: the processes might differ in other ways, or the SMDs external to the die simply might not tolerate it, or they might tolerate 2.8 or 2.9V with generous cooling but die slowly, and die instantly a bit higher than that. (And with PIIs and Katmai PIIIs you have the external cache further complicating potential failure modes.)
National Semiconductor's 250 nm M2 cores were also 2.9V rated, but then you have the 2.2V rated 180 nm parts that made up the end of the line for Cyrix MII production. (2.2V is very high for 180 nm parts, and I'm not sure whether there are significant differences in construction compared with AMD or Intel 180 nm parts, aside from AMD's copper interconnect, or whether the relatively modest clock speeds and power dissipation just made that sort of voltage rating attractive for yield management, or whether those Cyrix/NS parts might actually have more headroom for overvoltage than AMD or Intel parts.)
So, in any case, you might have some success running a Mendocino Celeron on a 2.8V Slot 1 board and potentially maxing out a 440FX- or LX-based system, or potentially a BX board with .35 micron PII support, but you'd be better off with an overclocking-oriented board that supports 2.2-2.6 volts or so. (It seems like a lot of Slot 1 and Socket 370 boards are 2.0V only, or 2.0 and 2.8V only, whereas voltage tweaking and overclocking support is a lot more common on Socket 370 Coppermine-era boards, though it might be that Intel-chipset boards more often lack voltage tweaking.)
Unlike Super Socket 7, it's also fairly uncommon to have Socket 370 boards that support all Socket 370 CPUs, especially ones that support both Mendocino and Coppermine chips. (that seems a good deal rarer than Tualatin support or boards with unofficial Tualatin compatibility for that matter)
To that end, you can also overvolt Coppermine PIIIs and Celerons quite a bit and push some well beyond the 1.1 GHz limit, and with the much lower power dissipation at stock voltage and the same clock rates as competing Athlon and Duron chips, you also have some headroom working with CPU cooling solutions from the time, especially given the cross-compatibility of Socket 370 and 462 heatsink mounting hardware. (Of course, you want to be careful with power supply limits too, and especially avoid stressing a cheap or otherwise non-fail-safe PSU that could take the board with it if it dies.)
Bear in mind 5V vs. 12V rail demands: emphasis on the 12V rail became common in the P4 era, but the 5V supply lines are usually the stressed ones on S370 up through early or even mid-generation Athlon XP and MP boards. (12V-demanding boards usually have an auxiliary 12V power connector, the typical square 2x2-pin one or sometimes a 1x4-pin HDD-style Molex connector: the latter was at least used on later revisions of some Athlon MP workstation/server boards that had originally been 20-pin ATX only.)
I had my old Celeron 1000 running at 1250 MHz without apparent problems around 9 years ago, but didn't bother stress testing it, as I had a 1.4 GHz Tualatin Pentium III-S that I stuck in that same board shortly after for a fast DOS+Win9x gaming build, and I played around with overclocking that instead. (It did fine at 150 MHz FSB / 1575 MHz, but Tualatin chips are known for going to 1.6 GHz or so without excessive voltage.)
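Those numbers follow directly from core clock = FSB x multiplier; Tualatin multipliers are locked (10.5x for the 1.4 GHz part), so the FSB is the only lever. A trivial sketch:

```python
# Core clock is just FSB times the (locked) multiplier, so raising the
# FSB from ~133 to 150 MHz takes a 10.5x Tualatin from ~1400 to 1575 MHz.
def core_clock_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

print(core_clock_mhz(133.33, 10.5))  # stock: ~1400 MHz
print(core_clock_mhz(150, 10.5))     # overclocked: 1575.0 MHz
```

The same math is why FSB overclocking also drags the memory and (on some chipsets) PCI/AGP clocks along with it, which can be the real limit rather than the CPU itself.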
I actually got several cheap Celeron 1000s and 1100s a while ago that I want to test the limits of at some point. (I may also have some 100 MHz FSB Coppermine PIII 1000s, but I'm not sure ... I may have avoided those due to price/availability, though I did also get several Tualatin Celerons of various speeds to try out, other than my (or my brother's) original 1300 Celeron that I'd had running at 1625 MHz for a while, pulled out of storage around the same time as that Celeron 1000.)
Though on the K6 family:
I can say from experience that .25 micron AMD K6, or at least K6-2 era, parts seem to die instantly at 3.5V, or at least near that, potentially board/VRM dependent ... not a slow death or an overheating-related one either, but an instant pop-dead. That happened with both a K6-III 400 I accidentally set to 3.5V and a K6-2 300 I then sacrificed to testing to confirm what I'd done: it tolerated up to the 3.4V setting without immediate problems, but failed to POST at 3.5V and was completely dead after that.
I'll probably delid those AMD CPUs at some point to examine the SMDs, in case a surface-mount capacitor or resistor popped or something. (I'm not actually sure what the SMDs on K6s are for, though I think some are used for jumper-selecting certain features, especially with K6-III dies: I've actually wondered if the L2 cache in there can be re-enabled on certain K6-2 400s and 450s that use the same die, but I think at least some of those may be coupling capacitors.)
I'm not sure whether the 180 nm K6-2+/III+ are as sensitive, relatively speaking (i.e. whether they tolerate 2.6 or maybe 2.8V without instantly dying), but I don't want to sacrifice one of those to find out. Actually, if a linear voltage scale has anything to do with it, I guess 2.9 or 3.0V would be closer, but I'm not sure that's relevant either. (Also, later Socket 7 boards, or at least my P5A-B, have some sort of current limiting or other safety feature that outright refuses to fully power on the board at certain settings: it may have actually prevented me from discovering the 3.5V vulnerability years earlier, as it tends to refuse to power on, or sends error beeps, with K6-2s set much above 2.6V, so even momentary POST testing wasn't possible there. It does tolerate 2.6V and 6x100, which is a more typical maxed-out K6-2 550 overclocking configuration.)
Or maybe the BIOS has an automatic shut-off safety feature if it detects a K6-2 or late K6 stepping along with certain voltage jumpers.
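For what it's worth, the "2.9 or 3.0V" figure above falls out of a simple proportional guess: assume (questionably, and purely as a back-of-envelope exercise) that the instant-kill threshold scales with the rated core voltage, and scale the observed ~3.5V threshold of the 2.4V-rated .25 micron parts down to the 2.0V-rated 180 nm ones:

```python
# Purely illustrative: if the failure threshold were proportional to the
# rated core voltage, the 2.0V-rated 180 nm K6-2+/III+ would fail around
# 3.5V * (2.0 / 2.4), relative to the 2.4V-rated .25 micron K6-2.
def scaled_threshold(v_fail_old, v_rated_old, v_rated_new):
    return v_fail_old * (v_rated_new / v_rated_old)

print(round(scaled_threshold(3.5, 2.4, 2.0), 2))  # -> 2.92, i.e. roughly 2.9-3.0V
```

There's no physical reason the relationship has to be linear (gate oxide breakdown depends on the process, not just the rating), so this is only a way of framing the guess, not a safe limit.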
I also don't really want to risk killing any of my Cyrix or IBM .25 micron CPUs to see if they're as sensitive as the AMD ones, even though those are 2.9V rated parts and probably have more generous surface-mounted components, if not other differences. (I may have already run them at 3.5V in the past but didn't document it: I know I went up to at least 3.3V when overclocking my first MII-366, and it's still kicking, though 3.3V didn't do much in the first place; it already ran OK at 300 MHz at the stock 2.9V and was never very happy at 330. Though I've since discovered it seems to run fine at 2.2V at the stock 250 MHz.)